🚀 MetaGPT Setup: Launch a Startup with One ✍️ Prompt!

25,616 views

Prompt Engineering

A day ago

Comments: 40
@Drone256
@Drone256 1 year ago
For a snake game you would just use one of the many publicly available code samples that implement exactly what you want. A really interesting example would be to try and use it to engineer something that does not yet exist. It also needs a feedback loop where it takes suggestions for changes, modifies the project and then gives you a new result.
@engineerprompt
@engineerprompt 1 year ago
I totally agree with you on this. I am trying to create a project with its help, which I hope to showcase in another video. The feedback loop is missing, and at the moment it doesn't have the ability to modify an existing code base. These would supercharge its usability.
@py_man
@py_man 1 year ago
@engineerprompt
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/tenacity-8.2.2-py3.9.egg/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/home/user/Documents/metagpt/metagpt/actions/action.py", line 62, in _aask_v1
    instruct_content = output_class(**parsed_data)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 5 validation errors for prd
Requirement Pool -> 0
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 1
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 2
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 3
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 4
  value is not a valid tuple (type=type_error.tuple)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/Documents/metagpt/startup.py", line 42, in <module>
    fire.Fire(main)
  File "/usr/local/lib/python3.9/dist-packages/fire-0.4.0-py3.9.egg/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.9/dist-packages/fire-0.4.0-py3.9.egg/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.9/dist-packages/fire-0.4.0-py3.9.egg/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/user/Documents/metagpt/startup.py", line 38, in main
    asyncio.run(startup(idea, investment, n_round, code_review, run_tests))
  File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/home/user/Documents/metagpt/startup.py", line 26, in startup
    await company.run(n_round=n_round)
  File "/home/user/Documents/metagpt/metagpt/software_company.py", line 60, in run
    await self.environment.run()
  File "/home/user/Documents/metagpt/metagpt/environment.py", line 67, in run
    await asyncio.gather(*futures)
  File "/home/user/Documents/metagpt/metagpt/roles/role.py", line 240, in run
    rsp = await self._react()
  File "/home/user/Documents/metagpt/metagpt/roles/role.py", line 209, in _react
    return await self._act()
  File "/home/user/Documents/metagpt/metagpt/roles/role.py", line 168, in _act
    response = await self._rc.todo.run(self._rc.important_memory)
  File "/home/user/Documents/metagpt/metagpt/actions/write_prd.py", line 145, in run
    prd = await self._aask_v1(prompt, "prd", OUTPUT_MAPPING)
  File "/usr/local/lib/python3.9/dist-packages/tenacity-8.2.2-py3.9.egg/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/tenacity-8.2.2-py3.9.egg/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.9/dist-packages/tenacity-8.2.2-py3.9.egg/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[]
😢😢😢 Please help
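The validation errors in the traceback above can be reproduced in miniature. This is a hedged sketch, not MetaGPT's real schema: the `Prd` class, its `requirement_pool` field, and the `(priority, requirement)` tuple shape are assumptions inferred from the error text, which says each "Requirement Pool" entry "is not a valid tuple".

```python
from typing import List, Tuple

from pydantic import BaseModel, ValidationError


# Hypothetical stand-in for MetaGPT's "prd" output class: the error text
# suggests "Requirement Pool" is expected to be a list of tuples.
class Prd(BaseModel):
    requirement_pool: List[Tuple[str, str]]


# A list of (priority, requirement) pairs validates fine:
ok = Prd(requirement_pool=[("P0", "Implement snake movement")])
print(ok.requirement_pool)

# But if the LLM emits plain strings instead of pairs, every entry fails
# validation, which is the failure mode the traceback shows (tenacity then
# retries _aask_v1 until it gives up with RetryError):
try:
    Prd(requirement_pool=["Implement snake movement"])
except ValidationError as err:
    print(err)
```

In pydantic v1 the message reads "value is not a valid tuple", in v2 "Input should be a valid tuple"; either way the usual remedies are re-running so the model emits the expected pairs, or using a newer MetaGPT release with a more tolerant parser.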
@JohnChristosMolura
@JohnChristosMolura 1 year ago
I think Llama has more potential in this aspect. I have not inspected the model file, but if it is open all the way down: an RNN-wrapped Llama 2 that is supervised only after the new weights have been stored, with another panel of judges, say GPT-4, Llama 2, and Bard, deciding on the supervision. I have a pet want as well, although it may backfire: an experiment feeding the three judges' feedback in directly, and adding another limited run of a "sage" or "monk" that teaches Llama 2 to decide for itself, even with the trio in the background... only use the info to guide, not to conform...
@SarkarAniruddha
@SarkarAniruddha 1 year ago
Is there any way to avoid the GPT-4/OpenAI platforms? I wanted to use Llama or Falcon to go fully open source.
@v.gedace1519
@v.gedace1519 1 year ago
Good video! Thanks for your effort. But it would be good to compare it to AutoGen, or at minimum to point out that there is a big brother called AutoGen. I don't know the exact publishing dates, but it seems to be an AutoGen clone, with the same issues:
- Expensive
- Just one AI at a time
- You aren't able to define which agent uses which AI
- No support for free AIs (like Llama 2, etc.)
- MetaGPT: the number of agents is predefined/limited, while AutoGen is open in that regard (and also includes agent grouping)
As always: keep an eye on this one too and wait for further versions!
@Anorch-oy9jk
@Anorch-oy9jk 6 months ago
So is there a way to use this framework for things other than creating projects? For example, I need an analysis team for analyzing data from a database.
@cudaking777
@cudaking777 1 year ago
Thank you for the content. Can you confirm whether the output can be modified for different requirements, such as solution design documentation or architecture design?
@ldandco
@ldandco 1 year ago
If this works even half OK, then it is the end of SaaS.
@cudaking777
@cudaking777 1 year ago
I was trying to build a similar approach by using LangChain agents to create multiple agents for different tasks; the coordination and reducing the cost were the tricky parts. I see this as a great tool for all the project documentation, procedures, timelines, and deliverables. That will cut time, and one person can create everything.
@engineerprompt
@engineerprompt 1 year ago
At the moment, this doesn't seem to have the ability to modify an existing code base.
@ianm00n
@ianm00n 1 year ago
I have a dream of starting my own game development studio. This could make it easier, because going solo is hard.
@hamidg
@hamidg 1 year ago
Is there something like this but for Llama 2 or any other open-source LLM?
@engineerprompt
@engineerprompt 1 year ago
I am not aware of anything with open source LLMs
@ilianos
@ilianos 1 year ago
There are services that take a "fake" OpenAI API key, but the results you get back from that API are generated by open-source LLMs (still not local). I forgot the name of that API...
@TheJazzWizard_
@TheJazzWizard_ 1 year ago
Depends on how you run your local models. If you run an inference server that mimics OpenAI's API, then you should be able to edit the OpenAI proxy in config.yaml to localhost:<port> and add a fake API key (it shouldn't matter what you write in the key). I would recommend using a CodeLlama variant for this, though. If you don't have the hardware to run the 34B model, it may not work very well, or probably won't work at all. If you do have the hardware and you want an easy way to run an inference server that uses the OpenAI API, I would recommend LM Studio. My comment keeps getting filtered for the localhost URL; hopefully this edit makes it through.
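For reference, the edit described above might look like this in MetaGPT's config.yaml. The key names follow the OpenAI-style settings MetaGPT used at the time, and the port and model name are assumptions (LM Studio's default local server), so check your own version's config template:

```yaml
# Point MetaGPT's OpenAI client at a local OpenAI-compatible server.
OPENAI_API_BASE: "http://localhost:1234/v1"  # LM Studio's default port (assumption)
OPENAI_API_KEY: "sk-anything"                # dummy value; a local server ignores it
OPENAI_API_MODEL: "codellama-34b-instruct"   # hypothetical name of the model the server exposes
```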
@arjunreddy8358
@arjunreddy8358 1 year ago
@ilianos could you please help with the name, so that it will be easy to use?
@xinyancai8244
@xinyancai8244 1 year ago
Cool video, thank you! Will you try launching some other cases?
@engineerprompt
@engineerprompt 1 year ago
Yes, that’s the plan.
@marcusmayer1055
@marcusmayer1055 1 year ago
How do I specify my own key instead of the GPT-4 API key, when the local GPT is running on my PC?
@MoFields
@MoFields 1 year ago
Would be great if you covered Jais. I tried to run it without success 😅😅
@engineerprompt
@engineerprompt 1 year ago
Can you share a link :)
@seraphin01
@seraphin01 1 year ago
Thanks for the video. I don't think I'll be touching this just yet, but it's promising for the months/years to come. I have to admit I need to read up more on Python and Docker and all that, because at the moment I just follow tutorials without really understanding what I'm doing, which is frustrating.
@faff
@faff 1 year ago
Ask GPT to explain the parts you don't understand. It's one of the things it does best.
@picklenickil
@picklenickil 1 year ago
I hope it doesn't end up like other "party trick" projects, which can't handle anything more complex than a snake game.
@mdaslamknl
@mdaslamknl 1 year ago
Excellent
@officialhush
@officialhush 1 year ago
Holy fucking god
@yevgenyrad
@yevgenyrad 1 year ago
"Launch a startup in one line", meanwhile it can't even create a working snake game.
@thinkhatke4020
@thinkhatke4020 1 year ago
Hehe
@AviralBajpai
@AviralBajpai 11 months ago
Was MetaGPT created with MetaGPT or not? 😂
@engineerprompt
@engineerprompt 11 months ago
Probably 😂😂
@TheBlackClockOfTime
@TheBlackClockOfTime 1 year ago
Become very disappointed with one prompt.
@MrGluepower
@MrGluepower 7 months ago
Anyone know how to get past this stage?
(metagpt) C:\>npm install -g @mermaid-js/mermaid-cli
npm WARN deprecated puppeteer@19.11.1: < 21.8.0 is no longer supported
[##################] - reify:puppeteer-core: timing reifyNode:node_modules/@mermaid-js/mermaid-cli/node_modules/chromiu
@WeRise562
@WeRise562 2 months ago
Hey man, did you solve this issue?