Oh wonderful! It was quite interesting to learn about Granite LLM and how it can help you write better code with the help of artificial intelligence. You did a great job breaking down the features and advantages and making them very clear. I can now better understand how this tool could enhance my productivity. Thanks for the useful information!
@longboardfella5306 · a month ago
Does it produce the same snippets each time? One issue with LLMs at present is the variability of their output, which diminishes trust.
@hansgruber3495 · a month ago
I think that's just a matter of the temperature parameter; as far as I know, at temperature 0 the LLM should produce the same output each time.
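A rough sketch of what the reply above describes, not something shown in the video: with the Hugging Face transformers library, disabling sampling gives fully repeatable output, while a low temperature only reduces variability. The Granite model ID below is an assumption chosen for illustration; substitute whichever model you actually use.

```python
# Minimal sketch of deterministic decoding with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-instruct"  # illustrative/assumed model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=False selects greedy decoding: the same prompt produces the
# same tokens on every run. With do_sample=True, a temperature near 0
# sharpens the distribution, but only disabling sampling is fully repeatable.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```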
@christopherd.winnan8701 · a month ago
Can I use this to work on my Metaculus prediction bot?
@JikeWimblik · a month ago
Could you not have a small language model, during chain of thought, draw from a dataset of around 28,000 colours, and then have a lower-dimensional layer handle translation and context layout? The dataset, the SLM, and that lower layer would have to be good enough at first; then you could get to technical tweaking. By 2027, a high-TOPS processor could run a small language model against a bigger dataset held in system memory rather than VRAM, using the fastest known circuits, producing a 60-second-long response.
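A very rough sketch of one way to read the idea above, entirely my own interpretation rather than anything from the video: a small lookup step retrieves the nearest named colour from a dataset and splices it into the context the small model reasons over. The tiny colour list and the prompt-building helper are hypothetical stand-ins for the ~28,000-entry dataset the comment imagines.

```python
# Hypothetical illustration: retrieve the nearest named colour from a dataset
# and add it to the context a small language model would reason over.
# The colour list is a stand-in for the full ~28,000-entry dataset.

COLOURS = [
    ("crimson", (220, 20, 60)),
    ("teal", (0, 128, 128)),
    ("goldenrod", (218, 165, 32)),
    ("slate gray", (112, 128, 144)),
]

def nearest_colour(rgb):
    """Return the dataset colour name closest to rgb by plain RGB distance."""
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(rgb, entry[1]))
    return min(COLOURS, key=dist)[0]

def build_context(rgb):
    """Compose the retrieved fact into text an SLM could consume."""
    name = nearest_colour(rgb)
    return f"The colour closest to RGB {rgb} in the dataset is '{name}'."

if __name__ == "__main__":
    # The retrieved line would be appended to the model's chain-of-thought
    # context before its next generation step.
    print(build_context((210, 30, 70)))  # -> mentions 'crimson'
```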
@zshn · a month ago
In all LLMs used for coding, multiline code snippets should be enclosed in ``` (triple backticks).