This article is aimed at MyShell creators who are already familiar with the basics of Pro Config. For more information on Pro Config, please refer to “MyShell Advanced - Beginner's Guide to Pro Config”.
Pro Config is a new way to create bots offered by MyShell, allowing for the rapid development of powerful AI applications with low-code solutions. The third session of the Pro Config Learning Lab is currently recruiting, and creators with programming backgrounds are welcome to sign up. Those who complete application development within a week can receive 1 to 2 MyShell GenesisPass worth 1.3K USD. I am a graduate of the first session, and I upgraded my Think Tank, which originally relied entirely on prompts, to Think Tank ProConfig, passing the assessment.
Think Tank is an AI think tank that allows users to select experts from multiple fields to discuss complex issues and provide interdisciplinary advice. The initial version of Think Tank ProConfig introduced custom parameters, enabling the presetting of specific field experts and internationalized content output.
Initially, I created the Pro Config version of Think Tank according to the tutorial to pass the assessment, but I did not fully understand all the functions of Pro Config. Recently, a recommended topics feature was added to enhance user interaction with AI. Through the development of new features, I gained a deeper understanding of Pro Config. Below are some practical tips that are not detailed in the official tutorials.
## Variables
Pro Config variables are divided into global and local variables, which are specifically discussed in the official tutorial's expressions and variables section.
Global variables are read and written using context. They are suitable for storing various types of data:
- string: A string, used to save text-type output or to store JSON objects serialized via `JSON.stringify`.
- prompt: A prompt; the system_prompt is stored in context, and all variables to be passed in are placed in the user_prompt.
- list: A list; like other non-string types, it needs to be wrapped in `{{}}`.
- number: A number, which also needs to be wrapped in `{{}}`.
- bool: A boolean value, which also needs to be wrapped in `{{}}`.
- null: A nullable value.
- url: External link data used for multimedia display; for example, `context.emo_list[context.index_list.indexOf(emo)]` below is used to display a specific image.
"context": {
"correct_count": "",
"prompt": "You are a think tank, your task is to analyze and discuss questions raised...",
"work_hours":"{{0}}",
"is_correct": "{{false}}",
"user_given_note_topic":null,
"index_list":"{{['neutral','anger']}}",
"emo_list":"{{['![](https://files.catbox.moe/ndkcpp.png)','![](https://files.catbox.moe/pv5ap6.png)']}}"
}
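As a minimal sketch of how the url-type entries above can be used (the local variable `emo`, assumed to hold a value such as 'anger', and the surrounding render block are illustrative, not code copied from Think Tank ProConfig):

```
"render": {
  // emo is assumed to be a local variable such as 'anger';
  // the matching markdown image link from context.emo_list is displayed.
  "text": "{{context.emo_list[context.index_list.indexOf(emo)]}}"
}
```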
Local variables can be of all the above types. Local variables can be passed between states via payload (e.g., responding to different button events), as detailed in the official Pro Config tutorial's function-calling-example.
When developing new states, it is advisable to first use render to display necessary variables to verify their correctness before adding tasks, as this can improve development efficiency.
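For example, a temporary debug render like the sketch below (the state name and the listed variables are placeholders) makes it easy to confirm that context values and payload fields arrive as expected before any tasks are wired up:

```
"new_feature_state": {
  "render": {
    // Temporary debug output: dump the variables this state depends on,
    // then replace this with the real tasks and render once they look right.
    "text": "**Debug**\n\nmore_questions_str: {{context.more_questions_str}}\n\ntarget_index: {{target_index}}"
  }
}
```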
## JSON Generation
To have a large language model (LLM) output content that can be used as button values, such as the two Related Topic buttons in Think Tank ProConfig, the prompt needs to be designed so that the model generates JSON that can be referenced from context.
There are two methods to achieve this:
**Aggregate response**: directly generate mixed text and JSON output, then use JS code to process the string and extract the JSON.
Refer to the following prompt (with content unrelated to JSON generation omitted):
……
<instructions>
……
6. Show 2 related topics and experts appropriate for further discussion in JSON format in a code block
</instructions>
<constraints>
……
- MUST display "Related Topics" in a code block
</constraints>
<example>
……
**Related Topics**:
```
[{"question": related_topic_1,"experts": [experts array 1]}, {"question": related_topic_2,"experts": [experts array 2]}]
```
</example>
While debugging with Google Gemini, I found that JSON generation in non-English environments is unstable: sometimes the output JSON is not wrapped in a ``` code block, and at other times the model outputs a ```json fence instead. Neither case can be reliably extracted with the simple expression reply.split('```')[1].split('```')[0].
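If you do want to keep extracting JSON from the aggregate response, a more defensive expression can cover both failure modes. The sketch below assumes the `{{}}` expression engine supports standard JavaScript string methods and the ternary operator; it drops the optional json language tag after the opening fence and falls back to the raw reply when no fence is present:

```
"outputs": {
  // If a fence exists, take the text between the first pair of ``` markers,
  // strip the "json" language tag, and trim whitespace;
  // otherwise treat the whole reply as the JSON payload.
  "context.more_questions_str": "{{ reply.indexOf('```') > -1 ? reply.split('```')[1].replace('json','').trim() : reply.trim() }}"
}
```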
Since extracting JSON from the aggregate response proved unstable, I chose to generate the JSON data with an additional LLM task instead.
**Multiple tasks**: generate the reply and the JSON in separate tasks.
Refer to the prompt for generating JSON as follows:
Based on the discussion history, generate a JSON array containing two related questions the user may be interested in, along with a few relevant domain experts for further discussion on each question.
<constraints>
- The output should be ONLY a valid JSON array.
- Do not include any additional content outside the JSON array.
- Related questions should NOT be the same as the original discussion topic.
</constraints>
<example>
```json
[
  {
    "question": "Sustainable Living",
    "experts": [
      "Environmental Scientist",
      "Urban Planner",
      "Renewable Energy Specialist"
    ]
  },
  {
    "question": "Mindfulness and Stress Management",
    "experts": [
      "Meditation Instructor",
      "Therapist",
      "Life Coach"
    ]
  }
]
```
</example>
The corresponding Pro Config is as follows, where `context.prompt_json` is the prompt for generating JSON:
……
"tasks": [
{
"name": "generate_reply",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "1744218088699596809", // claude3 haiku
"system_prompt": "{{context.prompt}}",
"user_prompt": "User's question is <input>{{question}}</input>. The response MUST use {{language}}. The fields/experts MUST include but not limit {{fields}}.",
"output_name": "reply"
}
},
{
"name": "generate_json",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "1744218088699596809", // claude3 haiku
"system_prompt": "{{context.prompt_json}}",
"user_prompt": "discussion history is <input>{{reply}}</input>. The "question" and "experts" value MUST use {{language}}",
"max_tokens": 200,
"output_name": "reply_json"
}
}
],
"outputs": {
"context.last_discussion_str": "{{ reply }}",
"context.more_questions_str": "{{ reply_json.replace('```json','').replace('```','') }}",
},
……
The first task uses the LLM to create the discussion content. The second task reads that discussion content, i.e. `{{reply}}`, uses the LLM to generate a JSON array of 2 related topics, strips the code block markers with `replace`, and writes the JSON string into the variable `context.more_questions_str`.
A small tip: set `"max_tokens": 200` to avoid generating overly long JSON.
Finally, this string is used as the button description (description), and the index of the clicked button (target_index) is recorded in the payload to drive the state transition.
……
"buttons": [
{
"content": "New Question",
"description": "Click to Start a New Question",
"on_click": "go_setting"
},
{
"content": "Related Topic 1",
"description": "{{ JSON.parse(context.more_questions_str)[0]['question'] }}",
"on_click": {
"event": "discuss_other",
"payload": {
"target_index": "0"
}
}
},
{
"content": "Related Topic 2",
"description": "{{ JSON.parse(context.more_questions_str)[1]['question'] }}",
"on_click": {
"event": "discuss_other",
"payload": {
"target_index": "1"
}
}
}
]
……
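For completeness, here is a rough sketch of what the receiving side might look like. The state name discuss_other_state is an assumption, and the sketch relies on payload fields such as target_index being available as local variables in the target state, as in the official function-calling example; none of this is copied verbatim from Think Tank ProConfig:

```
"transitions": {
  "discuss_other": "discuss_other_state"
},
……
"discuss_other_state": {
  "tasks": [
    {
      "name": "generate_reply",
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "1744218088699596809", // claude3 haiku
        "system_prompt": "{{context.prompt}}",
        // Pick the question the user clicked out of the stored JSON string,
        // using the target_index passed in the button's payload.
        "user_prompt": "User's question is <input>{{JSON.parse(context.more_questions_str)[Number(target_index)]['question']}}</input>.",
        "output_name": "reply"
      }
    }
  ],
  "render": {
    "text": "{{reply}}"
  }
}
```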
The AI logo design application AIdea also uses this technique. AIdea generates JSON through a separate task, while other information is extracted from context and concatenated into a string before rendering. Additionally, AIdea displays the product name directly inside the button element, unlike Think Tank ProConfig, which places it in the button description and therefore requires a mouse hover to view.
If the JSON structure is very complex, GPT's function calling can also be utilized to generate it. Note that this can only be used in GPT3.5 and GPT4 LLM widgets, as shown in the example below:
……
"tasks": [
{
"name": "generate_reply",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "1744214024104448000", // GPT 3.5
"system_prompt": "You are a translator. If the user input is English, translate it to Chinese. If the user input is Chinese, translate it to English. The output should be a JSON format with keys 'translation' and 'user_input'.",
"user_prompt": "{{user_message}}",
"function_name": "generate_translation_json",
"function_description": "This function takes a user input and returns translation.",
"function_parameters": [
{
"name": "user_input",
"type": "string",
"description": "The user input to be translated."
},
{
"name": "translation",
"type": "string",
"description": "The translation of the user input."
}
],
"output_name": "reply"
}
}
],
"render": {
"text": "{{JSON.stringify(reply)}}"
},
……
Output result:
For more detailed usage, please refer to the examples in the official Pro Config Tutorial.
## Memory
Use the following code to add the latest chat messages to memory and pass the updated memory to the LLM via the LLMModule's `memory` parameter, allowing it to respond based on previous interaction records.
"outputs": {
"context.memory": "{{[...memory, {'user': user_message}, {'assistant': reply}]}}"
},
The official tutorial concludes its description of the memory function here. Although the explanation is quite clear, there are still practical tips worth adding.
Bots built purely from prompts include memory functionality by default; suppressing it requires additional prompt engineering. Conversely, under Pro Config, memory is not integrated by default and must be managed manually by the developer.
Here is a simple example of using memory in Pro Config:
{
  "type": "automata",
  "id": "memory_demo",
  "initial": "home_page_state",
  "context": {
    "memory": ""
  },
  "transitions": {
    "go_home": "home_page_state"
  },
  "states": {
    "home_page_state": {
      "render": {
        "text": "Welcome to this memory demo. Input anything to start!"
      },
      "transitions": {
        "CHAT": "chat_page_state"
      }
    },
    "chat_page_state": {
      "inputs": {
        "user_message": {
          "type": "IM",
          "user_input": true
        }
      },
      "tasks": [
        {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744218088699596809",
            "system_prompt": "You are an assistant, you are programmed to respond directly, giving clear, concise, and direct responses.",
            "user_prompt": "{{user_message}}",
            "memory": "{{context.memory}}",
            "output_name": "reply"
          }
        }
      ],
      "outputs": {
        "context.memory": "{{[...context.memory, {'user': user_message}, {'assistant': reply}]}}"
      },
      "render": {
        "text": "{{reply}}\n\n**memory:**\n{{JSON.stringify(context.memory)}}\n\n**memory length:**\n{{context.memory.length}}"
      },
      "transitions": {
        "CHAT": "chat_page_state"
      }
    }
  }
}
The result after the first round of running is as follows:
Then, after entering `next is 2`, `third is 8`, and `final is 23`, the final output is as follows:
Okay, got it. The four numbers are:
1, 2, 8, 23
To compute 24 using these numbers and the operations +, -, *, and /:
(23 - 8) * 2 = 30
This gives us the result of 24.
**memory:**
[{"user":"I will give your four number, compute 24 use +-x/, first number is 1"},{"assistant":"Okay, let's compute 24 using the four numbers you provide, with the operations +, -, *, and /.\n\nThe first number you provided is 1.\n\nPlease provide the remaining three numbers."},{"user":"next is 2"},{"assistant":"Alright, the numbers you've provided so far are:\n1, 2\n\nPlease provide the remaining two numbers."},{"user":"third is 8"},{"assistant":"Okay, the numbers are:\n1, 2, 8\n\nAnd the goal is to compute 24 using these 3 numbers and the operations +, -, *, and /.\n\nHere's one way to do it:\n\n(2 * 8) + 1 = 17\n\nThis gives us 24."},{"user":"final is 23"},{"assistant":"Okay, got it. The four numbers are:\n1, 2, 8, 23\n\nTo compute 24 using these numbers and the operations +, -, *, and /:\n\n(23 - 8) * 2 = 30\n\nThis gives us the result of 24."}]
**memory length:**
8
The memory recorded the inputs and outputs of the first four rounds of dialogue, totaling 8 entries.
In some more complex tasks, excessive dialogue may cause memory to consume too many tokens, leading to errors, necessitating memory management.
In the example above for computing 24, each reply already restates the numbers provided so far, so only the first user instruction and the latest output need to be stored. Change `"context.memory": "{{[...context.memory, {'user': user_message}, {'assistant': reply}]}}"` to:
"context": {
"memory": "",
"user_task": null // Add a new context to store the initial instruction
},
……
"outputs": {
"context.memory": "{{[{'user': context.user_task}, {'assistant': reply}]}}",
"context.user_task": "{{context.user_task??user_message}}" // If user_task is null, use user_message; if not null, keep it unchanged
},
Running the same task outputs as follows, with the memory length consistently at 2.
Alright, the four numbers are:
1, 3, 8, 4
To compute 24 using +, -, *, and /, the solution is:
(1 + 3) * 8 / 4 = 24
**memory:**
[map[user:I will give your four number, compute 24 use +-x/, first number is 1] map[assistant:Alright, the four numbers are:
1, 3, 8, 4
To compute 24 using +, -, *, and /, the solution is:
(1 + 3) * 8 / 4 = 24]]
**memory length:**
2
`[map[` is the system's default display style for map objects when `JSON.stringify` is not used.
The above example aims to illustrate the functionality of memory; please ignore the correctness of the output content.
In Think Tank ProConfig, I only need to remember the previous round of discussion for format control, so the following code is sufficient, with the memory length fixed at 2:
"context.memory": "{{[{'user': target_question, 'assistant': reply+\n+reply_json}]}}"
Other memory management strategies include:
- Retaining only the most recent few dialogue records. For example, `...context.memory.slice(-2)` writes only the latest 2 history entries (see the sketch after this list).
- Categorizing memory storage by theme. Community creator ika uses `"yuna_memory": "{{[]}}", "yuna_today_memory": "{{[]}}"` to store the character yuna's global and daily memories separately.
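As a concrete sketch of the first strategy (a sliding window, not code taken from any of the bots mentioned above), the outputs entry below keeps only the last 4 stored entries before appending the newest round:

```
"outputs": {
  // Keep the last 4 stored entries and append the newest user/assistant pair,
  // so the memory passed to the LLM never grows beyond 6 entries.
  "context.memory": "{{[...context.memory.slice(-4), {'user': user_message}, {'assistant': reply}]}}"
}
```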
## Summary
This article introduced some advanced techniques for writing Pro Config with MyShell, including variables, JSON generation, and memory.
If you want to learn more about MyShell, please check out the author's curated awesome-myshell.
If you are interested in Pro Config and want to become an AI creator, remember to sign up for the Learning Hub, click here to register.