Getting started with TypeSpec
Comments
@bjarnenilsson80 2 days ago
"PHP was created for the browser" funny that it doesn't run in the browservatvall then, in fact id nevervtouches the cloent mschinevat all afaik it's serverecside, did you mean for "thevweb "?
@smilefaxxe2557 3 days ago
Awesome video and great explanation, thank you!!! ❤🔥 Just in case someone else is having the same problem as I had: "ld hello.o -o hello -lto_library System -syslibroot `xcrun -sdk macosx --show-sdk-path` -e _main -arch arm64" resulted in: "ld: dynamic main executables must link with libSystem.dylib for architecture arm64" I just had to append " -lSystem" at the end and it worked 👍
@coolmcdude 4 days ago
Thank you so much for this video! It helped me add MCP tools to my node JS local LLM project.
@LaksAIChannel-zy3do 4 days ago
Really interesting and informative! I too tried Image Puzzles on both o1 and Flash Thinking, and both had a tough time: kzbin.info/www/bejne/d2GqhKyneJ6GeJYsi=n1uRaxMq4L6u8Ta5
@LaksAIChannel-zy3do 4 days ago
Also interesting to see that the non-thinking version actually gives good results in some cases
@isaatalay5320 5 days ago
i wish you would do this video step by step... because i don't understand anything
@RomuloMagalhaesAutoTOPO 6 days ago
Wow... very good and exploratory analysis. Thank you very much. 👍 You reinforce using a specific model for a specific problem, for example TOC for Agents... Thanks for the reminder!
@ahmadhassan3315 6 days ago
please integrate it with a Together API key
@hoodhommie9951 6 days ago
Next time you test these models, give them an image of an economic calendar from FXStreet and ask them to count the number of red bars and orange bars in the screenshot, or try reading and extracting data from a screenshot of a Rogers and Mayhew steam table. I think image analysis is still a critical area these models need to work on.
@Jorsten 7 days ago
Which is better at coding, 1206 or flash thinking? I'm not too sure yet about the superiority of CoT for coding in general.
@PianothShaveck 7 days ago
9:20: why do you believe position 5 is the right answer? O loses:
O . . . . X . X . (your starting position)
O . . . O X . X . (O places on position 5, what you believe to be the solution)
O . . . O X . X X (blocking O, now X has two ways to win, on the third column and the third row)
O . O . O X . X X (blocking one of the two ways, that's the best O can do)
O . O . O X X X X (X wins)
The *actual* best continuation is as follows (there are two possible correct solutions, at position 3 and at position 7):
O . . . . X . X . (your starting position)
O . O . . X . X . (forcing X to block a tris)
O X O . . X . X . (the only move X can make is to block the tris, threatening a tris on the second column)
O X O . O X . X . (O blocks the tris on the second column and threatens a tris on both diagonals)
O X O . O X . X X (X blocks one of the two)
O X O . O X O X X (O wins by making a tris on the other diagonal)
This was the continuation for position 3. For position 7 the continuation is similar; in fact it's just rotated. It's all forced moves, so you can quickly verify it yourself.
So... position 9 is wrong, but position 5 is also wrong. The only correct solutions are position 3 and position 7.
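For anyone who would rather not trace the game tree by hand, here is a minimal brute-force sketch (plain Python, assuming only the position described above, with cells numbered 1-9 left to right, top to bottom) that scores each of O's candidate moves under perfect play:

```python
# Brute-force check of the tic-tac-toe analysis above.
# Cells are numbered 1..9, left to right, top to bottom.
# Starting position from the comment: O at 1, X at 6 and 8, O to move.

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
         (1, 5, 9), (3, 5, 7)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board, to_move):
    """Best result O can force with perfect play: +1 O wins, 0 draw, -1 X wins."""
    w = winner(board)
    if w == 'O':
        return 1
    if w == 'X':
        return -1
    empties = [i for i in range(1, 10) if board[i] == '.']
    if not empties:
        return 0
    scores = []
    for m in empties:
        board[m] = to_move
        scores.append(solve(board, 'X' if to_move == 'O' else 'O'))
        board[m] = '.'
    return max(scores) if to_move == 'O' else min(scores)

# index 0 is unused padding so cells line up with 1..9
start = list('?O....X.X.')
labels = {1: 'O wins', 0: 'draw', -1: 'X wins'}
for move in [i for i in range(1, 10) if start[i] == '.']:
    start[move] = 'O'
    print(f"O plays {move}: {labels[solve(start, 'X')]}")
    start[move] = '.'
```

Running it prints "O wins" only for moves 3 and 7 (and "X wins" for 2, 4, 5 and 9), which matches the analysis above.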
@chrishayuk 7 days ago
yes, agree, 3 and 7 are the correct moves... i am as bad at tic-tac-toe as the AI
@ebandaezembe7508 7 days ago
good test, we hope to improve the model shortly
@seniormcyt5552 8 days ago
what's 254*752-3+(2-7)+5? first convert it to simpler parts, then start solving it.
This model is very good at CoT, so if you make it think step by step, it gives you the right answer. For a better test of this model, you should make it break the problem down and think about it as much as possible. So the system prompt should be: always break down any question into simpler parts, then solve it step by step.
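If you want to try that system-prompt suggestion locally, here is a rough sketch; it assumes the ollama Python client (pip install ollama) and that you have already pulled a Phi-4 tag locally (the vanilj/Phi-4 tag mentioned elsewhere in this thread is only used as an illustration, swap in whatever you pulled):

```python
# Sketch: force a "break it down" system prompt via the ollama Python package.
# Assumptions: the ollama package is installed, the Ollama server is running,
# and the model tag below matches one you have pulled locally.
import ollama

response = ollama.chat(
    model="vanilj/Phi-4",  # illustrative tag; use whatever tag you pulled
    messages=[
        {"role": "system",
         "content": "Always break down any question into simpler parts, "
                    "then solve it step by step."},
        {"role": "user", "content": "what's 254*752-3+(2-7)+5?"},
    ],
)
print(response["message"]["content"])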
@chrishayuk 7 days ago
i shouldn't need to specify step by step for math problems... modern models know they need to do that for themselves.. so i purposely don't do it
@henry-js 8 days ago
What are your pc specs??
@chrishayuk 7 days ago
macbook pro m3 max with 128GB of unified memory
8 days ago
The Ollama model is quantized to Q4_K_M, so you will lose a lot of quality, especially context. It is not a fair comparison. Also, you may need some guidance to force thinking, which o1 and Claude 3.5 do but Ollama doesn't support yet.
@chrishayuk 7 days ago
it's not a highly quantized model, and i'm purposely focusing on areas where the quantization won't massively affect it. the model is not suddenly gonna gain more personality (this is a model size thing rather than a quantization thing), similar for the chains of thought, and similar for the code issues
7 days ago
@chrishayuk Q4 usually causes a 5 to 10% drop in accuracy; sometimes it can be up to 20%. Also, as I said before, Ollama does not support forced thinking yet, so it is not possible to compare Phi-4 to Claude 3.5 this way.
@Corteum 8 days ago
If you're really good at using the model, could you get a lot of work done with it, or does it have too many limitations and weaknesses?
@obinnaokafor6252 8 days ago
Models from Microsoft are really good
@chrishayuk 7 days ago
yeah, it's a good model, just frustrating
@obinnaokafor6252 6 days ago
@chrishayuk frustrating in what sense?
@polymathcode 9 days ago
Thanks for this! Also, for anyone running into the following error when using ld/linker manually:
"ld: warning: ignoring file hello.o, building for macOS-arm64 but attempting to link with file built for macOS-x86_64"
1. remove the hello.o file
2. specify the architecture when running clang, i.e. clang hello.c -c -o hello.o -arch arm64
3. then run the command as specified, e.g. ld hello.o -o hello -lSystem -syslibroot `xcrun -sdk macosx --show-sdk-path` -e _main -arch arm64
@polymathcode 9 days ago
Along the same lines, if you get errors that mention unknown tokens or an invalid instruction mnemonic (e.g. invalid instruction mnemonic "svc"), add the arch flag and specify arm64, i.e. as hello.s -o hello.o -arch arm64
@pmarreck 9 days ago
FYI, on Mac at least, the BoltAI GUI app will connect to locally-running Ollama and LM Studio models served by those apps
@artoke84 10 days ago
in Open WebUI, how did you set up dark mode? it is so useful
@chrishayuk 10 days ago
please note at 13:46, mini got the answer right, as did llama3.3 at 14:11. i did point out earlier in the video that position 2 is a correct answer. in the flow of the video i was focused on position 3, even though i had pointed out position 2 was also correct. you've all figured this out for yourselves. i'm not editing this video as it doesn't change the narrative. apologies for missing this in the flow of the video
@alelondon23 10 days ago
13:46 GPT4o mini got a RIGHT answer. position 2 is a winning move as much as 3 (you said it yourself earlier in the video), and you mistakenly discarded the GPT4o mini answer as invalid. Again at 14:11 you wrongly stated Llama 3.3 gets it wrong when it gets it right! You had 1 job! Please fix the video or carry your reputation as a shill.
@chrishayuk 10 days ago
lol, you're correct, i got it wrong....
@chrishayuk 10 days ago
i've put a pinned comment pointing it out
@tnypxl 10 days ago
The answer is always no.
@chrishayuk 7 days ago
yep, it always is
@Maisonier 10 days ago
Great video. Liked and subscribed
@chrishayuk 7 days ago
Awesome thank you!
@RafaelSales55 10 days ago
Hiii, thank you for the excellent tutorial. I was following along, however I saw something strange. The desktop version seemed laggy, so I added the Stats component to the application. While in the browser I was getting 100 fps, in the application I got 60 fps 😢. The idea of Tauri seems very good, but it doesn't seem great for desktop apps, since it appears to be a downgrade from the web browser version
@malllikvs 10 days ago
What is your hardware configuration?
@xxxNERIxxx1994 10 days ago
Or Cline with MCP!!
@xxxNERIxxx1994 10 days ago
Maybe try LM Studio or vLLM, or do they also not support tools?
@3750gustavo 10 days ago
Nemotron got it on the first try when responding to '254*752-3+(2-7)+5':
To evaluate this expression, we need to follow the order of operations, often remembered by the acronym PEMDAS (Parentheses, Exponents, Multiplication and Division from left to right, Addition and Subtraction from left to right). Here's how to break it down:
Given expression: `254 * 752 - 3 + (2 - 7) + 5`
1. **Parentheses first**: Solve the expression inside the parentheses.
- `(2 - 7) = -5`
2. **Update the expression**: Replace the solved parentheses with its result.
- `254 * 752 - 3 - 5 + 5`
3. **Multiplication next** (since there are no exponents):
- `254 * 752 = 191008`
4. **Update the expression again**:
- `191008 - 3 - 5 + 5`
5. **Finally, handle Addition and Subtraction from left to right**:
- `191008 - 3 = 191005`
- `191005 - 5 = 191000`
- `191000 + 5 = 191005`
So, the final result of the expression `254 * 752 - 3 + (2 - 7) + 5` is **191005**.
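For what it's worth, that arithmetic does check out; a one-line Python sanity check:

```python
print(254 * 752 - 3 + (2 - 7) + 5)  # -> 191005
```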
@sgwong513 5 days ago
On Phi-4, I got the correct answer on the first try when I gave it a hint based on what I observed from the Nemotron output:
calculate 254*752-3+(2-7)+5
To evaluate this expression, you need to follow the order of operations, often remembered by the acronym PEMDAS (Parentheses, Exponents, Multiplication and Division from left to right, Addition and Subtraction from left to right). Make sure you do the multiplication correctly: break it down and calculate the multiplication step by step. Make sure you sum multiple numbers correctly: break it down into two-number sums and repeat.
@thegrumpydeveloper 10 days ago
I think Sam Altman said "no one wants last year's model", or the second-best model. Unless it's really faster or better, most will stick to one of the others. Good to see though.
@patruff 10 days ago
No tool calling? Qwen will they be able to?
@patruff 10 days ago
Google Fi, Phi, but when are they going to release Fo, and Fum? As an Englishman by blood please respond.
@chrishayuk 10 days ago
Hahaha, I should release the outtakes of the intro…
@stoicescucatalin8071 10 days ago
What version of OpenWebUI are you using?
@QuizmasterLaw 11 days ago
ollama run vanilj/Phi-4 Yes?
@chrishayuk 11 days ago
yep, ollama.com/vanilj/Phi-4
@QuizmasterLaw 11 days ago
@chrishayuk thanks! liked and commented, probably subscribed - better check and be sure
@chrishayuk 11 days ago
thank you, glad you found the vid useful
@QuizmasterLaw 11 days ago
not yet available when searching the Hugging Face library, but if someone has a pull command for Ollama please say
@thomecq 10 days ago
ollama run vanilj/Phi-4:Q8_0
@jacquesdupontd 10 days ago
uh ? ollama run vanilj/Phi-4:Q8_0
@DrWaldonHendricks 11 days ago
I used a Gen10 A2 GPU, and it actually did a really good job on the latest model. It used about 10GB of VRAM at most compared with o1, and it was not far off a really good model
@FalconStudioWin 11 days ago
The 14B parameter model may give worse answers with test-time compute, as smaller models generally don't reason as well as larger models
@dot1298 11 days ago
does OpenWebUI cost money? can it run *anything* ?
@dot1298 11 days ago
or only docker-images?
@FuzailShaikh 10 days ago
It's open source and free
@ozugru 10 days ago
It is a frontend, connecting to your model running behind some API like Ollama. It's free
@Gamatoto2038 8 days ago
why would something that u run locally cost money
@Junon15 11 days ago
Saved me time and heartache figuring this out the hard way. I can ask for nothing more. Thanks!
@chrishayuk 11 days ago
super glad to hear it was useful
@d.d.z. 11 days ago
Nice video Chris
@chrishayuk 11 days ago
thaaaank you
@DriftlessCryptoToo 11 days ago
Bravo!!! 🎉🎉🎉
@jimlynch9390 11 days ago
Are you sure selecting 2 is wrong?
@husanaaulia4717 11 days ago
At this point, Supernova Medius is better?🤔
@chrishayuk 11 days ago
😂 definitely not
@Cyb3rPunk-o8h 11 days ago
you talk too much, can't you get to the point right away
@baskanaqua 13 days ago
GJ! I want to ask you something: I added the source column but I don't know how to fine-tune it...
@MicheleHjorleifsson 13 days ago
Excellent stuff, thank you. Would love to see a video on how to use JSON-RPC in other projects.
@RussellAshby 13 days ago
Great job Chris, any plans to support multiple servers like Claude Desktop can? I've built an MCP server that can build PowerPoint decks; I can chain it with the SQLite MCP server to pull data from a database and dynamically build tables and charts in PowerPoint. All in one prompt
@chrishayuk 13 days ago
multiple servers should work - just add another --server <server_name> in addition to your existing one. i plan to make this better soon, but it should work
@RussellAshby 13 days ago
@chrishayuk that worked perfectly, thank you. Llama 3.2 doesn't seem that great with tools, but OpenAI is working like a charm and faster than Claude too
@chrishayuk 13 days ago
@RussellAshby yeah, OpenAI is really sweet with the client, even mini
@ce22s018sivakumarks 12 days ago
@chrishayuk I tested this feature - two servers, weather and sqlite - as suggested with --server <server_name>. Both servers are working fine. We can go on to add as many servers as we have. Maybe supporting a list would be the better approach
@chrishayuk 12 days ago
@russellashby, I agree... I'm holding off a little on that just now, as I want to do a better job of holding the tools in the context; at the moment it's brutal context stuffing, and I think I can do a better job of that, but maybe I should make the change sooner rather than waiting for me to figure that out
@TaherSayed-o7l 14 days ago
Ollama needs such an expensive PC
@spaul-vin 16 days ago
32b-instruct-q4_K_M - 23gb vram
@raveioo2521 16 days ago
I was having an error while linking the hello.o file with libraries via the ld command, stating the System library was not found. You can rectify that by using -lto_library (in place of -l) System (rest all the same). It was really a wonderful video! Can't thank you enough, Chris!