/root/code/InternLM
Note: switching to '3028f07cb79e5b1d7342f4ad8d11efad3fd13d17'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 3028f07 fix(readme): update README with original weight download link (#460)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语). - InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless. - InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文. """
messages = [(system_prompt, '')]
print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")
%%writefile /root/code/InternLM/web_demo_user.py
"""
This script refers to the dialogue example of streamlit, the interactive
generation code of chatglm2 and transformers. We mainly modified part of the
code logic to adapt to the generation of our model.
Please refer to these links below for more information:
    1. streamlit chat example: https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps
    2. chatglm2: https://github.com/THUDM/ChatGLM2-6B
    3. transformers: https://github.com/huggingface/transformers
"""
from dataclasses import asdict
import streamlit as st
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import logging
from tools.transformers.interface import GenerationConfig, generate_interactive
# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []
# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"], avatar=message.get("avatar")):
        st.markdown(message["content"])
# Accept user input
if prompt := st.chat_input("What is up?"):
    # Display user message in chat message container
    with st.chat_message("user", avatar=user_avator):
        st.markdown(prompt)
    real_prompt = combine_history(prompt)
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": prompt, "avatar": user_avator})
with st.chat_message("robot", avatar=robot_avator): message_placeholder = st.empty() for cur_response in generate_interactive( model=model, tokenizer=tokenizer, prompt=real_prompt, additional_eos_token_id=103028, **asdict(generation_config), ): # Display robot response in chat message container message_placeholder.markdown(cur_response + "▌") message_placeholder.markdown(cur_response) # Add robot response to chat history st.session_state.messages.append({"role": "robot", "content": cur_response, "avatar": robot_avator}) torch.cuda.empty_cache()
/root/code/InternLM
/root/.conda/envs/internlm-chat/lib/python3.10/site-packages/IPython/core/magics/osm.py:417: UserWarning: using dhist requires you to install the `pickleshare` library.
self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
You can now view your Streamlit app in your browser.
URL: http://127.0.0.1:6006
load model begin.
Loading checkpoint shards: 100%|██████████████████| 8/8 [00:25<00:00, 3.20s/it]
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 0.00 MB. The target location /root/.cache/huggingface/hub only has 0.00 MB free disk space.
warnings.warn(
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 0.00 MB. The target location /root/.cache/huggingface/hub/models--internlm--internlm-chat-7b/blobs only has 0.00 MB free disk space.
warnings.warn(
tokenizer_config.json: 343B [00:00, 31.9kB/s]
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 0.01 MB. The target location /root/.cache/huggingface/hub only has 0.00 MB free disk space.
warnings.warn(
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 0.01 MB. The target location /root/.cache/huggingface/hub/models--internlm--internlm-chat-7b/blobs only has 0.00 MB free disk space.
warnings.warn(
tokenization_internlm.py: 8.95kB [00:00, 35.6MB/s]
A new version of the following files was downloaded from https://huggingface.co/internlm/internlm-chat-7b:
- tokenization_internlm.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 1.66 MB. The target location /root/.cache/huggingface/hub only has 0.00 MB free disk space.
warnings.warn(
/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 1.66 MB. The target location /root/.cache/huggingface/hub/models--internlm--internlm-chat-7b/blobs only has 0.00 MB free disk space.
warnings.warn(
tokenizer.model: 100%|█████████████████████| 1.66M/1.66M [00:00<00:00, 2.92MB/s]
special_tokens_map.json: 95.0B [00:00, 18.2kB/s]
load model end.
load model begin.
load model end.
load model begin.
load model end.
^C
Stopping...
/root/code/lagent
Note: switching to '511b03889010c4811b1701abb153e02b8e94fb5e'.
HEAD is now at 511b038 update header-logo (#72)
%%writefile /root/code/lagent/examples/react_web_demo_user.py
import copy
import os
import streamlit as st
from streamlit.logger import get_logger
from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.agents.react import ReAct
from lagent.llms import GPTAPI
from lagent.llms.huggingface import HFTransformerCasualLM
    def setup_sidebar(self):
        """Setup the sidebar for model and plugin selection."""
        model_name = st.sidebar.selectbox(
            '模型选择:', options=['gpt-3.5-turbo', 'internlm'])
        if model_name != st.session_state['model_selected']:
            model = self.init_model(model_name)
            self.session_state.clear_state()
            st.session_state['model_selected'] = model_name
            if 'chatbot' in st.session_state:
                del st.session_state['chatbot']
        else:
            model = st.session_state['model_map'][model_name]
        plugin_action = [
            st.session_state['plugin_map'][name] for name in plugin_name
        ]
        if 'chatbot' in st.session_state:
            st.session_state['chatbot']._action_executor = ActionExecutor(
                actions=plugin_action)
        if st.sidebar.button('清空对话', key='clear'):
            self.session_state.clear_state()
        uploaded_file = st.sidebar.file_uploader(
            '上传文件', type=['png', 'jpg', 'jpeg', 'mp4', 'mp3', 'wav'])
        return model_name, model, plugin_action, uploaded_file
    def init_model(self, option):
        """Initialize the model based on the selected option."""
        if option not in st.session_state['model_map']:
            if option.startswith('gpt'):
                st.session_state['model_map'][option] = GPTAPI(
                    model_type=option)
            else:
                st.session_state['model_map'][option] = HFTransformerCasualLM(
                    '/root/model/Shanghai_AI_Laboratory/internlm-chat-7b')
        return st.session_state['model_map'][option]
    def initialize_chatbot(self, model, plugin_action):
        """Initialize the chatbot with the given model and plugin actions."""
        return ReAct(
            llm=model, action_executor=ActionExecutor(actions=plugin_action))
    def render_user(self, prompt: str):
        with st.chat_message('user'):
            st.markdown(prompt)
    def render_assistant(self, agent_return):
        with st.chat_message('assistant'):
            for action in agent_return.actions:
                if (action):
                    self.render_action(action)
            st.markdown(agent_return.response)
    # Initialize chatbot if it is not already initialized
    # or if the model has changed
    if 'chatbot' not in st.session_state or model != st.session_state[
            'chatbot']._llm:
        st.session_state['chatbot'] = st.session_state[
            'ui'].initialize_chatbot(model, plugin_action)
    for prompt, agent_return in zip(st.session_state['user'],
                                    st.session_state['assistant']):
        st.session_state['ui'].render_user(prompt)
        st.session_state['ui'].render_assistant(agent_return)
    # User input form at the bottom (this part will be at the bottom)
    # with st.form(key='my_form', clear_on_submit=True):
    if user_input := st.chat_input(''):
        st.session_state['ui'].render_user(user_input)
        st.session_state['user'].append(user_input)
        # Add file uploader to sidebar
        if uploaded_file:
            file_bytes = uploaded_file.read()
            file_type = uploaded_file.type
            if 'image' in file_type:
                st.image(file_bytes, caption='Uploaded Image')
            elif 'video' in file_type:
                st.video(file_bytes, caption='Uploaded Video')
            elif 'audio' in file_type:
                st.audio(file_bytes, caption='Uploaded Audio')
            # Save the file to a temporary location and get the path
            file_path = os.path.join(root_dir, uploaded_file.name)
            with open(file_path, 'wb') as tmpfile:
                tmpfile.write(file_bytes)
            st.write(f'File saved at: {file_path}')
            user_input = '我上传了一个图像,路径为: {file_path}. {user_input}'.format(
                file_path=file_path, user_input=user_input)
        agent_return = st.session_state['chatbot'].chat(user_input)
        st.session_state['assistant'].append(copy.deepcopy(agent_return))
        logger.info(agent_return.inner_steps)
        st.session_state['ui'].render_assistant(agent_return)
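Once written, this demo is started the same way as the earlier one, e.g. `streamlit run /root/code/lagent/examples/react_web_demo_user.py --server.port 6006` (the port is an assumption, matching the URL logged for the InternLM demo above).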
/root/code
Cloning into 'InternLM-XComposer'...
remote: Enumerating objects: 680, done.
remote: Counting objects: 100% (680/680), done.
remote: Compressing objects: 100% (273/273), done.
remote: Total 680 (delta 361), reused 680 (delta 361), pack-reused 0
Receiving objects: 100% (680/680), 10.74 MiB | 8.78 MiB/s, done.
Resolving deltas: 100% (361/361), done.
/root/code/InternLM-XComposer
Note: switching to '3e8c79051a1356b9c388a6447867355c0634932d'.
HEAD is now at 3e8c790 add polar in readme
/root/code/InternLM-XComposer
Init VIT ... Done
Init Perceive Sampler ... Done
Init InternLM ... Done
Loading checkpoint shards: 100%|██████████████████| 4/4 [00:25<00:00, 6.37s/it]
load model done: <class 'transformers_modules.internlm-xcomposer-7b.modeling_InternLM_XComposer.InternLMXComposerForCausalLM'>
/root/code/InternLM-XComposer/examples/web_demo.py:1068: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
chat_textbox = gr.Textbox(
Running on local URL: http://0.0.0.0:6006
init
Could not create share link. Missing file: /root/.conda/envs/internlm-chat/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2.
Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:
1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /root/.conda/envs/internlm-chat/lib/python3.10/site-packages/gradio
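The three steps can be scripted; a minimal sketch using only the URL and target path quoted in the message (making the binary executable is an extra assumption):

import os
import stat
import urllib.request

url = "https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64"
dest = "/root/.conda/envs/internlm-chat/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2"
urllib.request.urlretrieve(url, dest)  # steps 1-2: download under the expected name
os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)  # the frp binary must be executable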
<object object at 0x7efdf06ec340>
The tulip (scientific name: Tulipa gesneriana L.) is a plant of the genus Tulipa in the lily family, also known by the Chinese names 洋荷花 and 草麝香. It is native to the Mediterranean coast and to the semi-arid and alpine regions of West Asia and southern Siberia. The Dutch were the first to grow tulips as ornamental flowers; the tulip was introduced to China in the mid-16th century and spread to the countries of Europe in the 17th century.
Tulips come in a rich range of colors, including red, orange, yellow, purple, white, black, bicolored, and edged varieties, and a single plant can bear flowers of different colors. The blooms are large, gorgeous, and richly fragrant, conveying a sense of stateliness, elegance, and splendor. Besides its high ornamental value, the tulip also has considerable value for development as a commercial crop.
<TOKENS_UNUSED_1> … (run of repeated padding tokens)
The lines suitable for inserting an image are <Seg0> and <Seg2>.
[0, 2]
Tulip flowers, rich and varied in color.
Tulip flowers, with large and gorgeous blooms.
{0: 'Tulip flowers, rich and varied in color.', 2: 'Tulip flowers, with large and gorgeous blooms.'}
https://static.openxlab.org.cn/lingbi/jpg-images/105d05c6bc63e3f446c715f10b1c5bb349e09c1e2860fa2d510d0aabde193a1a.jpg
download image with url
image downloaded
https://static.openxlab.org.cn/lingbi/jpg-images/11eba488365ed9c830601ab473788c82b6fda279d05c2485227ee8cb089b2f51.jpg
download image with url
image downloaded
model_select_image
0 The tulip (scientific name: Tulipa gesneriana L.) is a plant of the genus Tulipa in the lily family, also known by the Chinese names 洋荷花 and 草麝香. It is native to the Mediterranean coast and to the semi-arid and alpine regions of West Asia and southern Siberia. The Dutch were the first to grow tulips as ornamental flowers; the tulip was introduced to China in the mid-16th century and spread to the countries of Europe in the 17th century.
<div align="center"> <img src="file=articles/如何培育郁金香/temp_1000_0.png" width = 500/> </div>
1 Tulips come in a rich range of colors, including red, orange, yellow, purple, white, black, bicolored, and edged varieties, and a single plant can bear flowers of different colors. The blooms are large, gorgeous, and richly fragrant, conveying a sense of stateliness, elegance, and splendor. Besides its high ornamental value, the tulip also has considerable value for development as a commercial crop.
2 <TOKENS_UNUSED_1> … (run of repeated padding tokens)
<div align="center"> <img src="file=articles/如何培育郁金香/temp_1002_2.png" width = 500/> </div>
^C
Keyboard interruption in main thread... closing server.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: huggingface_hub in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (0.20.2)
Requirement already satisfied: filelock in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (3.13.1)
Requirement already satisfied: fsspec>=2023.5.0 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (2023.12.2)
Requirement already satisfied: requests in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (2.31.0)
Requirement already satisfied: tqdm>=4.42.1 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (4.66.1)
Requirement already satisfied: pyyaml>=5.1 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (6.0.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (4.9.0)
Requirement already satisfied: packaging>=20.9 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from huggingface_hub) (23.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from requests->huggingface_hub) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from requests->huggingface_hub) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from requests->huggingface_hub) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in /root/.conda/envs/internlm-chat/lib/python3.10/site-packages (from requests->huggingface_hub) (2023.11.17)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Note: you may need to restart the kernel to use updated packages.
Consider using `hf_transfer` for faster downloads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
./config.json
1
import json
1 2 3
with open('./config.json', 'r') as jf:
    config = json.load(jf)
config
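For reference, a download that yields the ./config.json path printed above can be written with the huggingface_hub package installed earlier; a minimal sketch (the repo id is assumed from context):

from huggingface_hub import hf_hub_download

# Fetch a single file from the Hub into the current directory and return its
# local path; repo_id is an assumption based on the model used in this session.
path = hf_hub_download(repo_id="internlm/internlm-chat-7b",
                       filename="config.json", local_dir="./")
print(path)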