You’re really a special person. Been a joy to follow you here. Keep making these please!
@KyleKabasares_PhD · 10 days ago
Thanks, that means a lot! 🥲
@byrnemeister2008 · 10 days ago
On Salesforce: the CEO says to the public "we are not hiring anyone this year," but their job postings say otherwise. Hundreds of software engineer openings. Benioff is just a salesman.
@wwkk4964 · 10 days ago
Thanks for sharing, Kyle. The last question is the real one that will answer the others. I believe we will see big things in the coming year!
@alfrede.newman1838 · 10 days ago
It's all about balance, IMHO: having fun playing in the weeds in the real world while also keeping a healthy perspective as the landscape evolves. Venting on YT like this periodically is a good way to check your balance. BTW, really enjoying these AI topics so far. My takeaway from these videos is "dare to imagine and ask about the hard topics."
@davewilson4427 · 10 days ago
I was thinking about your black hole mass idea. My thought is that training on an enormous, data-intensive "visual" dataset might allow a different approach. What if you trained a model on the astrophysics principles we already know to be true, and then looked for hits that are similar (k=2 maybe) but specifically don't match? That loss might be something to train on, or possibly something to generate an inverse dataset from. Also, speaking as a neurodivergent adult, I feel like we could use more neurodivergent CoT datasets, as the "normie" CoT is simply too linear. Love the channel buddy, thanks!
@NicholasWilliams-uk9xu · 9 days ago
Definitely the scale differential between the proton width and the Planck scale needs to be explained in a concrete way, where we discover all the physical distributions of action over that scale differential, to develop better comprehensive probabilistic calculations. Also solving energy concerns and sustainable tech scaling, in order to lift countries like India out of poverty sustainably, getting food to children and supporting their healthy development.

I see your point about lidar: pattern consistency or depth mapping will have differential fluctuations (texturing) which can be statistically multiplied against prelabeled image data to output a likelihood of what that texture correlates with, with the sample texture tagged. But there is a better algorithm that can dynamically learn on its own, and learn to classify based on a parallel distribution of neuron-acceleration and reward-detection-acceleration convergences acting as a weight update factor.

What is needed is real-time, autonomous, multiplicative "burn in" of weights whenever a neuron's output acceleration temporally converges with (core or intermediate) reward detection acceleration. Core reward detection is defined as temporal proximity to a goal, something simple such as [temperature]. Intermediate reward measures are patterns that temporally accelerate together with core reward detection acceleration; they are multiplicatively burned into memory and form a growing tree of reward measures that further push and pull on neuron weights to achieve goal convergence (a list or tree attached to one core reward mechanism, or to multiple reward mechanisms synergistically, again through multiplicative burn).

The product of an input pattern's acceleration and the core reward detection acceleration determines the additive or subtractive strength with which that pattern is remembered as a new reward measure. The product of a neuron's output acceleration (or deceleration) and the sum of the reward measure accelerations tied to a core reward gives that neuron's weight adjustment. The appetite for a core reward and its intermediate reward tree, which shapes how strongly they influence neuron weights, is calculated as one minus the resources already acquired for that core reward.

Pattern accelerations that converge with core reward measure acceleration are multiplicatively burned in and become intermediate reward measures. These intermediate reward measures are then used to multiplicatively burn in algorithmic behavior that accelerates with new pattern detection accelerations. When a node in the network accelerates above its initialization count just prior to a pattern detection acceleration, its weight is increased by the multiple of the reward detection acceleration amount for each node. A reward measure's weight on changing node weights is multiplied by the appetite function, which increases the influence of these reward measures and equals one minus acquired resources.

No backpropagation is needed, and the system learns on its own, without prelabeled data or data theft, by growing new reward measures; it stops optimizing in a specific direction when appetite is low. This is far superior; the master algorithm has been exposed. It's important to note that pattern detections are themselves reward measures and are multiplicatively burned into memory.
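Taken literally, the rules in the paragraph above could be sketched roughly as follows. This is a minimal Python sketch under my own assumptions: the names (appetite, burn_in, weight_update), the learning rates, and the exact functional forms are illustrative, not the commenter's; only the stated relationships (appetite = 1 - acquired resources, burn-in driven by the product of accelerations, weight change driven by neuron acceleration times summed reward acceleration) are taken from the text.

```python
import numpy as np

def appetite(acquired_resources: float) -> float:
    # Appetite for a core reward = one minus the resources already acquired for it.
    return 1.0 - acquired_resources

def burn_in(pattern_strength: float, pattern_accel: float,
            core_reward_accel: float, rate: float = 0.1) -> float:
    # "Multiplicative burn": a pattern is strengthened (or faded, when the
    # product is negative) in proportion to the product of its detection
    # acceleration and the core reward detection acceleration.
    return pattern_strength * (1.0 + rate * pattern_accel * core_reward_accel)

def weight_update(weights: np.ndarray, neuron_accel: np.ndarray,
                  reward_accels: np.ndarray, acquired_resources: float,
                  lr: float = 0.01) -> np.ndarray:
    # Each neuron's weight moves by (its output acceleration) times (the sum
    # of reward measure accelerations tied to the core reward), scaled by the
    # appetite; no backpropagation is involved.
    drive = reward_accels.sum() * appetite(acquired_resources)
    return weights + lr * neuron_accel * drive
```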
These are memorized input patterns that feed the network and that have been proven to work in the past; they are memorized through temporal acceleration convergence with core goal detection mechanisms, which is multiplicative burn. They then carry weight in shaping neuron weights whenever those neurons' outputs temporally accelerate with that pattern detection acceleration. Nodes in the network are a different system; they are tuned, on a node-per-node basis, by temporal acceleration convergence with pattern detection acceleration, based on their temporal acceleration convergence with the autonomously growing reward system.

So what you have are core reward measures (something simple such as temperature), then intermediate pattern detection reward measures, and then the network's algorithmic behavior. These need to be aligned in terms of outcome, namely core reward detection maximization and resource acquisition, which are the core drivers you instantiate. The appetite function increases the activity of these concerted systems based on inverse acquired resources. A system might have many core reward measures and appetites, grow many intermediate reward measures for each of them, and let algorithmic formation in the network portion be guided by these systems. The multiplicative burn will slowly delete a pattern when negative acceleration occurs, mutating it into a different pattern so that it ceases to be an active reward measure, which is exactly what is needed for forgetting pathological behavior.

There is a bit of secondary math for favoring a specific core reward, its intermediate rewards, and the model tied to it, based on which have the highest appetite or which are accelerating in temporal detection together. A one-over-appetite multiplier mediates the appetites' magnitudes based on their activity. This can be trained so that if different core reward measures are synergistically accelerating together they remain active together; otherwise they cull each other, and the highest appetite has prominence in culling the others. Each reward mechanism has an appetite multiplier, which is one minus (the sum of the other appetites times their trainable weights for culling the reward measure in question) when they are temporally active together; each culling weight is inverse to the rewards' temporal convergent acceleration with each other. For synergistic culling of a reward measure's influence on node weight adjustments, the weight of a node is adjusted by adding the product of the reward measure, the appetite, and the appetite multiplier. The culling weight of an external reward measure is trained based on the proportion by which it is culled by another reward measure through synergistic multiplicative temporal convergence of detection acceleration. This is influenced by temporal reward detection acceleration-deceleration convergence, which decreases or increases the culling of a reward measure relative to other active reward measures, leading to automated segmentation of reward systems with different objectives or paths. Basically it is a network of culling parameters trained through trial and error to narrow in on the right symbiotic strategy for amplifying resource acquisition per core reward.
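One way to read the culling and appetite-multiplier arithmetic above, again as a hedged sketch: array shapes, the clipping to [0, 1], and all identifiers are my assumptions, and only the relation "multiplier = 1 - sum of other appetites times their culling weights" plus "node adjustment = sum of reward measure x appetite x appetite multiplier" comes from the comment.

```python
import numpy as np

def appetite_multipliers(appetites: np.ndarray,
                         culling: np.ndarray) -> np.ndarray:
    # culling[j, k] = trainable weight with which reward measure j culls k.
    # multiplier_k = 1 - sum over j != k of appetite_j * culling[j, k]
    n = len(appetites)
    mask = 1.0 - np.eye(n)                       # exclude self-culling
    pressure = (appetites[:, None] * culling * mask).sum(axis=0)
    return np.clip(1.0 - pressure, 0.0, 1.0)     # clipping is an assumption

def node_adjustment(reward_measures: np.ndarray, appetites: np.ndarray,
                    culling: np.ndarray) -> float:
    # Total weight change contributed to one node: the sum of
    # (reward measure) * (appetite) * (appetite multiplier).
    mults = appetite_multipliers(appetites, culling)
    return float((reward_measures * appetites * mults).sum())
```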
The system is not remembering patterns for the sake of remembering patterns; it uses them as reward "temperature" detection that, when it accelerates, influences the shaping of neuron weights whenever those neurons temporally accelerate with that pattern. The pattern is an ever-mutable feature that is not defined by the user. Core rewards are different; they are simple comparisons like (temperature), or some floating-point number you are trying to maximize. You also need a way for the neural network's outputs to feed changes back into the intermediate rewards, to further shape those patterns into better optimizers of neuron activity, so that the network being trained can tune the patterns itself once it becomes complex enough to have an intelligent, high-level grasp on intermediate reward instantiation (not core rewards, only intermediate rewards). That means giving the outputs of the neural network operators and functions to modulate those patterns into different shapes to target its own comparisons.

As you can see, high-level intelligence is easy too. It's all about asking: am I just fitting to pre-existing knowledge, or am I inducting into new knowledge? If you can be honest about these questions, and work from where work/time is being performed (convergent amplitudes that render acceleration), then you are in the right fundamental lane for inducting into new knowledge relative to goal maximization. Once a neural network has sufficient inference capability, it can learn to make mutations on intermediate reward mechanisms: multiplying patterns together to narrow in on more important, reduced convergent patterns; subtracting patterns to find differentials that let it be driven down or up gradients; adding them together to merge texturing; and copying and pasting them to attach to other core rewards. This allows dynamic, procedural generation of intermediate reward drivers that further tune the network to cross stochastic hills that converge with core reward acceleration: a high-dimensional angular drive of neural network behavioral tuning that converges on reward detection acceleration.
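If the pattern operators named at the end were made concrete as elementwise array operations (purely an illustrative assumption on my part, including the function names), they might be as small as:

```python
import numpy as np

def narrow(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    return p * q          # multiply: keep only convergent structure

def differential(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    return p - q          # subtract: expose a differential to drive along

def merge(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    return p + q          # add: merge texturing

def reattach(p: np.ndarray) -> np.ndarray:
    return p.copy()       # copy a pattern to attach to another core reward
```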
@svenhoek · 10 days ago
Whatever hypothesis the AI conjures, the AI needs to also suggest experimental procedures to prove/disprove it.
@KyleKabasares_PhD · 10 days ago
Good point.
@CswDoc80JL · 10 days ago
“The agents”. Sounds very Matrixy.
@KyleKabasares_PhD · 10 days ago
Doesn’t it?
@jeffwads · 10 days ago
"Fascinating, Captain." Agreed, Mr. Spock.
@RickySupriyadi · 6 days ago
"I was using o1 like a chat model, but o1 is not a chat model. If o1 is not a chat model, what is it? I think of it like a 'report generator.' If you give it enough context and tell it what you want outputted, it'll often nail the solution in one shot." - Ben Hylak. Found that on the internet. What about you, Kyle? You seem to have used o1 often; how was your prompting experience with it?
@noway8233 · 10 days ago
I'm very concerned about the real impact on society. The impact of AI looks very disruptive to me. It's like when we used horses for thousands of years and then one day Ford succeeded with cars, and that was the end of the horsemen's jobs. I understand that nobody can stop progress, but this AI is not good for many, many people; they are going to be replaced.
@ArifKhan-vc6zg · 9 days ago
You don't look 30, more like 20.
@danielpaquin9913 · 10 days ago
Would Google's work on handwriting recognition work for lidar data categorization? kzbin.info/www/bejne/rafWdmugopZ6sKcsi=9jQy1sLlcnB-hEkU