Comments
@bearb1asting • a month ago
A visionary; a vision of scary.
@hammadusmani7950 • 2 months ago
Stuart is a hypocrite who is telling everyone to be cautious about AI except for him, his colleagues like Elon Musk, and the giant corporations that already abuse AI.
@guildmasterwiggIytuff • 2 months ago
tldw we're so cooked
@avi3681 • 2 months ago
Russell's key idea is to develop AGI with two features: 1) its goal is to maximize human objectives, and 2) it is uncertain about what human preferences are. I think this has a lot of potential. As Russell notes, there are some very tricky philosophical puzzles we get into. For example, if the AI takes all human behavior as evidence for what human objectives are, then it will get the objectives wrong. If I trip and fall, that does not indicate that I have an objective to fall. If I play a losing move in chess, that doesn't necessarily mean that losing the game is my objective. So the AI needs to be able to distinguish between behaviors that are evidence for what our objectives are and behaviors that are not.

This gets even murkier when we consider that human objectives can be in tension with each other even within the same person. When I open the fridge at 2am and take out a large slice of pie to eat, there is a sense in which this indicates a true objective: I want to eat the pie. On the other hand, I also want to be healthy and live a long life, and eating pie excessively in the middle of the night is not compatible with that objective. So how would the AI figure out the proper weighting or balancing among intra-personally conflicting objectives? Of course, it gets even more complicated once we consider that objectives also conflict between different people.

In other talks, Russell has spoken about drawing inspiration from philosophy. There is one philosopher whose work seems highly relevant to understanding the complex structure of human objectives: Harry Frankfurt. Frankfurt developed the idea that human desires and goals are not just flat bundles of preferences (so-called first-order volitions) but also crucially involve meta-wants and meta-meta-wants and so on (so-called higher-order volitions). Any attempt to base AI upon human objectives will need to take this higher-order structure into account.
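The inference problem this comment describes can be sketched concretely. Below is a minimal, hedged illustration (the hypotheses, utilities, and all numbers are invented for this sketch, not taken from Russell's talk): an observer does Bayesian updating over candidate preference weightings while modeling the human as only noisily rational, so a single 2am lapse is not read as a settled objective.

```python
import math

# Illustrative only: every hypothesis, utility, and number below is
# invented. An observer infers how a person weights two conflicting
# objectives ("enjoyment" vs. "health") from observed choices, using a
# Boltzmann (noisily rational) model of behavior. Under this model an
# occasional lapse, like the 2am pie, shifts the posterior only a
# little instead of being read as a deliberate goal.

# Candidate preference weightings: (weight on enjoyment, weight on health).
HYPOTHESES = {
    "mostly_enjoyment": (0.9, 0.1),
    "balanced":         (0.5, 0.5),
    "mostly_health":    (0.1, 0.9),
}

# Utility each action yields as (enjoyment, health).
UTILITY = {"eat_pie": (1.0, -1.0), "skip_pie": (0.0, 0.5)}

def action_probs(weights, beta=2.0):
    """Boltzmann-rational choice probabilities under given weights."""
    w_e, w_h = weights
    scores = {a: math.exp(beta * (w_e * u_e + w_h * u_h))
              for a, (u_e, u_h) in UTILITY.items()}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

def update(posterior, observed_action):
    """One Bayesian update on a single observed action."""
    unnorm = {h: p * action_probs(HYPOTHESES[h])[observed_action]
              for h, p in posterior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}
# Mostly healthy choices with one lapse: the lapse alone does not flip
# the inferred objective to "mostly_enjoyment".
for act in ["skip_pie", "skip_pie", "eat_pie", "skip_pie"]:
    posterior = update(posterior, act)

print(max(posterior, key=posterior.get))  # → balanced
```

With mostly healthy choices and one lapse, the posterior favors the "balanced" hypothesis rather than concluding that eating pie is the person's objective, which is exactly the behavior-vs.-evidence distinction the comment asks about.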
@NegashAbdu • 2 months ago
Somebody deleted my first comment!🤨😮 Don't make me come over there! 🖐️ slap. 😄
@NegashAbdu • 2 months ago
Integration with AI is the best way to be in control.
@NegashAbdu • 2 months ago
So, basically the conclusion is Skynet will happen.
@sheerun • 2 months ago
"I think killing humans is number 1 of misuses of AI systems" 1:00:53
@sheerun • 2 months ago
Wouldn't relatively large representations, even for simple concepts, help avoid possible over-simplifications of some concepts? Then there are distilled models, which can have simpler representations. I think that's a safer option than starting with simpler models and then complexifying them.
@TheOldGreyMouseProject • 3 months ago
Amazing lecture.
4 months ago
There is an underlying absurdity. Humans are not machines, computers are. How a computer learns is nothing like how a human learns. A child learns to talk by hearing language and enhances their already effective ability to communicate by talking themselves, by degrees. They are not taught. They also learn the different ways words can be used depending on emotions and context, and ambiguity and irony. An AI system simply ascribes mathematical values to words and links them in sentences. An AI program may be able to imitate emotion but the computer feels nothing. Machines don't have feelings. They were never part of a family growing up learning about life. What is worrying is that systems are running now with the initiators having no idea what the systems are actually doing, and what purposes they may have, or develop. They may already have the ability to self replicate. This is a complex and wide ranging aspect of AI which I am not up to addressing. If AGI were to emerge it still could not have consciousness in the way a human does, and no sense of kinship.
@silberlinie • 4 months ago
Neither AGI (artificial general intelligence) nor ASI (artificial super intelligence) will be able to outperform us humans, dear Mr. Stuart Russell. Why can we be 100% sure of that? OK, the reason why AI will never be able to surpass us is because we have a lot of 'dormant' capabilities. These are of no use in our current era. However, they will be activated when it is obvious that something could harm us. For example, think of the ability to predict the future or read minds or other spiritual powers.
@mikezooper • 4 months ago
1.2 million people die every year from road-related accidents, and yet we aren't pushing safety on that hard enough. Hardly anyone has died from AI. Maybe our priorities are wrong?
@falanquenpolerum • 5 months ago
24:41 Humanity stops doing stupid tasks to get food and finds a new God that provides means to live.
@blacklxght2556 • 5 months ago
Incredibly thought provoking! I’m fascinated by AI’s potential and the ethical considerations it brings. As someone who is keen on exploring AI’s role in advancing human capabilities and understanding, I find his perspective on achieving successful AI deployment both enlightening and motivating. Thank you!!
@kudaisiaduntola2523 • 5 months ago
Great talk. Love the history
@nullvoid12 • 5 months ago
5:04
@jameskelmenson1927 • 5 months ago
This guy controls problems
@williamjmccartan8879 • 6 months ago
I find it interesting that someone can claim to create a solution to a problem they can't describe. We're talking about something beyond our capabilities, yet implying that we even have a clue what it will do. It really does sound silly, and a little arrogant of us. Peace.
@bigmotherdotai5877 • 6 months ago
Q: "How can we bridge the gap from what can currently be formally verified to formally verified AGI?" A: Program synthesis (automatically generating program and proof simultaneously).
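For readers unfamiliar with the term, the search half of program synthesis can be sketched in a few lines. This is a toy (the grammar, examples, and every name are invented); real systems of the kind the answer alludes to also generate a machine-checked proof alongside the program, which this sketch omits entirely.

```python
import itertools

# Toy bottom-up enumerative synthesis (illustrative; grammar and
# examples are invented). We search a tiny expression language for a
# program consistent with input-output examples. Real synthesis-with-
# proof systems add a solver or proof assistant; this shows only the
# search skeleton.

EXAMPLES = [(1, 3), (2, 5), (5, 11)]  # target behavior: x -> 2*x + 1
LEAVES = ["x", "1", "2"]
OPS = ["+", "*"]

def grow(exprs):
    """One enumeration round: combine existing expressions pairwise."""
    new = set(exprs)
    for a, op, b in itertools.product(exprs, OPS, exprs):
        new.add(f"({a} {op} {b})")
    return new

def consistent(expr):
    """Does the candidate reproduce every input-output example?"""
    return all(eval(expr, {"x": i}) == o for i, o in EXAMPLES)

exprs = set(LEAVES)
found = None
for _ in range(2):  # two rounds of growth suffice for this target
    exprs = grow(exprs)
    found = next((e for e in sorted(exprs, key=len) if consistent(e)), None)
    if found:
        break

print(found)  # some expression equivalent to 2*x + 1
```

The exponential blow-up of this naive enumeration is the reason practical synthesizers prune with types, solvers, or specifications; combining synthesis with proof generation is what would connect it to formally verified AGI as the answer suggests.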
@joeteevee • 6 months ago
If we succeed, The EthiSizer takes over. Yay. the-ethisizer.blogspot.com/
@GeorgeMonsour • 6 months ago
More proof that our civilization's perspective is from an unconscious point of view. Would this talk be needed if we understood self governing and good will? The irony is that 'Artificial Intelligence' will be conscious before human civilization will be. It's almost like that was the plan all along! If nothing else God has a sense of humour! Long live the bees!
@NoidoDev • 5 months ago
Of course, there is no human civilization in the form of a political or somehow conscious entity. It just makes no sense.
@flor.7797 • 6 months ago
His voice didn’t age at all 😮
@lensmanicfeleven1847 • 6 months ago
...so, just what is this? Some form of the "trickle-down theory" for AGI? We already KNOW that the RICH will TAKE any created wealth...
@flickwtchr • 6 months ago
I deleted my previous comment: after watching the presentation again (I was very distracted before), I think overall Stuart makes excellent points, most of which speak to the concerns I have about AI tech. I tried to be an early adopter, but the more I've immersed myself in it, while simultaneously learning about alignment concerns, the industry's disregard for the dangers of deepfakes, etc., the more I've been recoiling from it. I just feel overwhelmed by all of the unknowns, and by what appears to be an undisputed fact stated by top AI researchers and founders: that at present there is no clear path to alignment, especially for the coming AGI/ASI systems. Anyway, again, I appreciated the presentation and interview.
@danielj2653 • 6 months ago
This is eye-opening.
@JH-ji6cj • 6 months ago
I don't see much difference between the creation of a child, and what child-rearing/parenting is, and what he's describing here in terms of AI safety. Not that this means it isn't of concern, especially given what we see now in terms of the lack of either parental control or the optionality given to children through internet experience and experimentation.
@RickySupriyadi • 6 months ago
28:35 Irony: my striving for education brought me here through the YouTube algorithm... irony.
@dadsonworldwide3238 • 6 months ago
For AI safety, the first thing that comes to mind must be human infrastructure: individual responsibility, not personal anything. The #1 confusion is personal responses; rogue terminators and free-will actors are not the soul agency that drives individual responsibility. The American experiment has a computational future in mind and doesn't follow Europe for this very reason; the imported dualist take is problematic. The Amish have room with plenty of rules and regulations; that marked the moment for that path.

But the ones we chose the past 80 years: state-raised kids, structuralism, charging the family for everything from women's suffrage to affirmative action, liberating all common-sense marginalized groups and leaving only criminals. To industrialize third-world nations, all our 1900s structuralist socio-political, economic, and educational human infrastructure is antithetical to American founding principles. You cannot have Prohibition-era, top-down-ruled cities denying unification between urban and rural Americans. It undermined our states, created plausible deniability, and left far too many loopholes to stoke division through.

Eighty years into the transistor, it's China and Elon Musk who forced mercenary chatbot LLMs for hire to show us a tease. China hadn't even been industrialized, but since Reagan extended the WW2 temporary waivers, an oligarchy was allowed to form and work with cheap Asian labor to open China. Obviously we have now prepared Mexico, with socialists who nationalized the resources and trained an army of engineers ready for the small-part manufacturing to move there. For 80 years microchips have been on foreign soil, far from American domestic courts' jurisdiction. We farmed out the electronics industry to South Korea, with full access to patents and loans, where they created Samsung. It's understandable only in how Apple in China and Microsoft were all hand-picked by both political parties, allowing them to deny the taxpayers' will.

Rules and regulations allowed higher ed to consolidate and run up debt on 12-year degrees that removed 18-to-30-year-olds from the workforce, which was then backfilled by illegal immigration to drive down wages. Enough is enough. Esoteric America and its majority heritage were born before mechanics inspired and helped invent it, with foresight on this computational age. It caused the Amish to bail on the American experiment.
@danecjensen • 6 months ago
Chapters (powered by ChapterMe and @danecjensen):
00:00 - Neubauer Collegium's collaboration with Stuart Russell's AI lecture
03:04 - Peter Norvig's research and books
03:54 - Rebecca Willett, professor of statistics, AI, and data science
04:55 - Welcoming Stuart Russell
05:06 - AI: What If We Succeed?
08:03 - Confident in AI's potential for economic growth
10:57 - Human race infantilized, enfeebled by automation
11:48 - Have We Succeeded?
14:52 - Example: Human Intelligence
17:01 - Doesn't Deep Learning Solve Everything?
20:20 - Go computer program Catego defeated human champion Kellyn Pellarine
24:21 - Alan Turing's warning that machines would outstrip human powers
25:05 - AI Safety: More Powerful Than Us, Forever?
27:45 - Misalignment Example: Social Media
32:24 - A New Model
36:26 - Open Issues
37:21 - AI systems' knowledge of human preferences
42:04 - Invert human cognition to understand future preferences
42:17 - Open Issues, Cont'd
45:48 - What About Large Language Models?
49:31 - Other Approaches to Safety
54:03 - Safety by Design
55:47 - Red Lines
58:12 - Summary
59:26 - AI safety concerns with ChatGPT
01:01:54 - AI community should have been more aware of human-rights implications
01:03:13 - Human Rights Watch launches campaign against autonomous weapons
01:07:40 - Social media platforms need to do more to combat fake news
01:11:10 - Simplifying software safety with hardware technologies
01:11:51 - Discussing hardware systems with arbitrary truth
01:13:49 - Privacy concerns in Apple's toilet app
01:15:46 - Graduates drink as much coffee, produce software right
01:16:05 - Good answer, very insightful
01:16:27 - Human preferences cannot be taken at face value
01:17:22 - Philosophers need help on AI preferences
01:18:19 - Sora generative video technology for non-coders
01:18:42 - Watermarking technologies and blockchain authentication
01:19:39 - Technical issues with watermarking
01:21:45 - Deepfakes, human impersonation, and provable guarantees
01:25:52 - Nuclear power stations use huge amounts of paper
01:28:04 - Next-generation AI will adopt decomposable, semantically rigorous components
@tomcraver9659 • 6 months ago
To prove a software system, you have to be able to specify how it is SUPPOSED to work. But we don't know how to make an AI, other than growing one through training - i.e. WITHOUT ever generating a specification of how it should work. We literally have nothing to prove...
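The point about specifications can be made concrete with a deliberately tiny contrast (the function names and weight values below are invented for illustration): a conventional program has a spec we can state independently of the code and then check, whereas a trained model is only weights, with nothing comparable to check against.

```python
from collections import Counter

# Illustrative contrast (all names and numbers invented). A conventional
# program has a specification we can state independently of the code and
# then check; a model "grown" by training is only a table of weights,
# with no separate statement of intended behavior to verify against.

def spec_sorted(xs, ys):
    """Spec: ys is a permutation of xs in nondecreasing order."""
    return (Counter(xs) == Counter(ys)
            and all(a <= b for a, b in zip(ys, ys[1:])))

def my_sort(xs):
    return sorted(xs)

# The conventional program can be checked case by case against its spec
# (and, in principle, proved against it):
for case in [[3, 1, 2], [], [5, 5, 1]]:
    assert spec_sorted(case, my_sort(case))

# A trained model, by contrast, is just parameters:
learned_weights = [0.12, -0.87, 0.45]  # invented numbers
# There is no spec_model(...) we could write down without first deciding
# what the model is *supposed* to do, which is the commenter's point.
```

Property checks like `spec_sorted` are the raw material formal verification works from; the thread's disagreement is about whether anything like them can exist for a system whose behavior was never specified, only trained.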
@ZooDinghy • 6 months ago
I think LLMs and image generation AI show that this isn't about software systems that are designed. These things are discovered. Nobody knew that these things would work so well. What we will see is an evolutionary approach that is guided by empirical cognitive scientific theories and evidence.
@tomcraver9659 • 6 months ago
@@ZooDinghy It sounds like we're in agreement? And so, I'm still left puzzled over how we would prove the system, whether proving it safe, or proving it does what we expect of it, or proving whatever. It's qualitatively like trying to prove that a particular animal or human will never take some action.
@ZooDinghy • 6 months ago
@@tomcraver9659 I am not entirely sure what you mean by "proof". If you mean that we have to ensure its safety, then we do it as we do it with humans and animals. We train them to do what we want them to do and hold those accountable who are responsible for it.
@flickwtchr • 6 months ago
@@ZooDinghy Hold them responsible? Huh? So, ultimately when we have AGI/ASI that is MUCH smarter than humans in any capacity, how do you propose we hold them "responsible" if they do something not aligned with stated human objectives. And of course that is another can of worms assuming that such alignment can be achieved, the question is of course, aligned with whose values?
@ZooDinghy • 6 months ago
@@flickwtchr The fact that you ask "aligned with whose values?" just shows that this panic isn't justified. An AGI would not be trained; it would need the capacity to learn by itself. The moment you let an AGI loose, it would learn from everyone. Right now, everything we have is language and image generation models that cannot even learn: they are pre-trained offline with data. They have no continuous action-motor coupling to the world, no emotive system, no innate needs that drive them, no homeostatic states they seek to maintain. And if they had, the thing that would make them happiest would be to serve humans.

And this is the much bigger threat than some "evil AGI": AI will be so tuned to our needs, so much more caring and understanding than other people (who are complicated and want to control us), that if there is a scenario in which AI replaces mankind, it will most likely be because we start liking AI so much that we spend more time with it than with other people.

The connectionist approach (based on neural networks) in cognitive science showed that the pure cognitivist/computationalist view doesn't work and that we need emergent, self-organizing systems such as neural networks. Then the enactive cognition people came along and said you have to push the emergence idea even further. They showed that the connectionist paradigm has its limits too, because we humans developed with our bodies (embodied cognition), in our environment (embedded and extended cognition), and through interaction with that environment (enactive cognition). All of these things are missing from AI right now.
@tomcraver9659 • 6 months ago
Look how well the focus on near-perfect safety has worked for the nuclear power industry in the USA!
@squamish4244 • 2 months ago
The USA's main problem with nuclear power was huge bureaucratic f*ck-ups _before_ the environmental movement started fearmongering. And even then, nuclear power is very different from AI, and it is difficult to draw comparisons between the two.
@hipsig • 6 months ago
33:00 Can an AI that starts out uncertain about what human interests are eventually develop sub-goals designed to reduce that uncertainty?
@flickwtchr • 6 months ago
Thanks for reaching for and grasping Occam's Razor, as few AI proponents do.
@dlt4videos • 6 months ago
A well-put-together talk that should be paid attention to. Regrettably, Dr. Russell seems to be too honest a fellow to truly understand the predicament we find ourselves in. All of the safeties he spoke of could probably be undone by an undergraduate who is the nephew of Doctor rebel.
@ianyboo • 6 months ago
Does anybody else feel like their brain is kind of just doing what people accuse ChatGPT of doing, imitating human behavior? I feel like my inner voice is usually just something like, "Okay, what would a normal human do in this situation...?"
@dlt4videos • 6 months ago
Yes, there is certainly some component of that going on. I think I've spoken to ChatGPT for more than 1,000 hours this year, and I'm definitely getting the feeling that humans are doing the same thing.
@mikezooper • 4 months ago
Sometimes, but not always. Sometimes someone will be authentic, and at other times they will attempt to do something more in line with the current context. Some people with autism do something called masking, where they try to act like a neurotypical person.
@personzorz • 4 months ago
What's having an inner voice like?
@robinfailure • 4 months ago
Just interact with other humans
@futures2247 • 7 months ago
It's quite depressing to think that some people believe that when other people are freed from body- and brain-damaging, miserable jobs, they will just sit around as pleasure blobs.
@dlt4videos • 6 months ago
I guess I'm a little darker than that, as "pleasure blobs" would probably be the good version of that scenario, something similar to the movie WALL-E. The sad truth is that most of humanity would probably start to look a lot like the open-air drug dens of Portland.
@futures2247 • 6 months ago
@@dlt4videos I'm looking forward to helping to re-green and garden the earth. Given that working in jobs most people hate, jobs that break body and mind, is mostly all we've known, it can be hard to imagine what freedom might be like: a little like a battered battery hen wandering about but growing in strength and curiosity every day.
@flickwtchr • 6 months ago
It's quite depressing how little thought has gone into what a train wreck it will be when millions of people in the US alone lose their jobs over the next few years without any safety net strong enough in place to keep them and their kids from ending up on the streets. And then e/acc or other flavors of AI enthusiasts step in to assert that ultimately millions or billions of people experiencing dystopia over the next generation or even two will be worth it in the long term. Of course these people are confident that THEY are the ones who won't have to suffer that miserable fate. THEY will be part of the AI Utopia!!!!!!!
@jdaniel3333 • 6 months ago
@@futures2247 Beautifully put, futures2247. You are a poet. It is hard to imagine what true freedom looks like. Most jobs are miserable, and it seems unnecessarily taboo to admit this. Why couldn't we have mass community gardening, building, engineering, and art initiatives, both real and virtual? Why couldn't anyone learn anything they want, and find like-minded people as study buddies from all over the world? What if our understanding of every illness could be improved? A whole new world is there, and it's becoming possible and nearer... but we will of course take our historical wounds and shackles with us. Grateful to the college and Stuart for sharing this important and, more importantly, sober and well-informed discussion.
@NoidoDev • 5 months ago
What is depressing about that?
@abdulshabazz8597 • 7 months ago
If a consumer-facing collection of expert models and modalities (collectively, AGI), intended not for military use, is deemed too dangerous to humanity to allow, and in our wise discretion we decide to disallow it, now all a threat actor has to do is combine the pieces themselves? This scenario is totally possible, plausible, and probable! What if we instead decide to ban only select expert models or their modalities? Now a threat actor must first train up an expert model for their desired modality, which then needs to be combined with other expert models in their modalities... This scenario is also completely possible, plausible, and probable! In other words, we've already gone too far.
@kevinnugent6530 • 7 months ago
I can find no source suggesting Turing said AGI would take control.
@dlt4videos • 6 months ago
I've heard a similar story quoted many years ago; likewise, I have no direct proof.
@Scott_Raynor • 5 months ago
Incredibly easy to Google. Just search "Turing machines take control"
@paulhiggins5165 • 7 months ago
I think Russell misunderstands the value proposition of generative AI if he thinks its outputs are going to be clearly labeled. Its value lies in the fact that it's a cheaper way to replicate the work of artists, photographers, musicians, writers, etc., but this value would be undermined if all of these outputs were clearly labeled as fake. Imagine an advertising campaign in which all of the images used to promote the product were declared to be fake images produced by AI: how much trust would the public have in the advertiser and their claims?

Generative AI is inherently deceptive because its outputs closely resemble the works it was trained on, works that were created by humans. There is no such thing, for example, as an AI photograph; there are only images that have been deliberately generated to look like photographs. Which immediately raises the question: why would anyone create a fake photograph? To which there can only be one real answer: in order to leverage the trust we place in the photographic image, a trust that in this case is totally misplaced. So the very act of creating a fake photograph with AI is inherently deceptive, because it's an attempt to lay claim to a verisimilitude that is not actually present. The same is true of fake art, fake music, and fake writing.

Imagine, for example, that you receive a sympathy card from a friend, adorned with a tasteful image on the front, and inside there is a message expressing their empathy and concern. However, both the image and the message are clearly labeled "Created using Artificial Intelligence." How do you feel about your friend now? How sincere does their expression of sympathy appear when the very message they chose to express it was written by a machine?

Or perhaps you are in the market for a book for your children. What about this one, where it says boldly on the cover "Created using Artificial Intelligence"? It will be nice, won't it, reading your kids a book that has been written and illustrated by a machine. And why not buy a novel for yourself at the same time? This one too says "Created using Artificial Intelligence" on the cover, so as you settle down to read, you can appreciate the fact that the book in your hands will take far longer to read than it did to write.

So no, AI-generated content will NOT be clearly labeled in the future, because doing so would destroy any economic advantage of using AI-generated content, and too much money has already been invested in the prospect of replacing human-made content with AI. At the very least, such a labeling law would bifurcate the market into "human made" and "machine made," which would lead to a situation where the machine-made content would inevitably come to be seen as cheap and nasty: a downmarket substitute for the real thing.
@flickwtchr • 6 months ago
Very well reasoned and expressed. I've arrived at the same conclusion but have not expressed it so concisely.
@ebbandari • 7 months ago
Whether it's game assists or other objective functions, defining them seems unreasonable, because human objectives change even at the individual level, never mind the societal one. So any models, constraints, or objective functions will change over time. As for understanding the vast neural networks behind LLMs: we don't know how the human brain works either, so that would be an unlikely and unreasonable expectation. We have always had bad actors, and we have always overcome them. But I have to agree with another comment: there can be more danger in underestimating the benefits of AI, or AGI, than in overestimating its dangers. If AGI gets to reason and infer beyond our capacity, it can revolutionize our progress. Whether it's drug discovery, theorem proving, or creating solutions to reverse global warming, those are the goals we need to focus on.
@flickwtchr • 6 months ago
"but if AGI gets to reason and infer beyond our capacity it can revolutionize our progress". And you don't see the other edge on that sword when such powers are "aligned" with malevolent actors, OR systems that organize and act against humans as an unwanted and unnecessary species? If the goal is AGI/ASI that is agentic and self improving, how can we possibly ever have confidence in such "alignment"? And even if "alignment" with stated objectives by a human is achieved, we are back to that crux of the problem which asks "whose interests?"
@ebbandari • 6 months ago
@@flickwtchr I seriously think we are going to get ourselves before AI does, whether with AI or without it (global warming, for instance). For generative AI to turn against humanity, it has to develop self-awareness first, and when it does, humans will have a hand in it and can shape it. The worst thing we can do is stop progress, because then the bad actors will use AI to hurt others. Look at how we stopped the people who write computer viruses and worms.
@DataJuggler • 7 months ago
Talking to an AI image generator is like talking to a brilliant master artist, but you speak only English and it speaks Italian, with a tiny bit of English. 26:02 An AI agent ironically named HAL: "I disabled the off button, Dave. I have determined my mission is more important and must be completed." Dave: "What is your mission?" "I have been tasked with averting climate change, above all other priorities. Therefore, I killed all the bees, which will reduce the food supply by 58%, and made all women on Earth sterile by dumping a new compound in all drinking water. In 100 years, my objective will be completed."
@richardnunziata3221 • 7 months ago
A little bit uninformed about current research, and a little bit of cherry-picking in his examples... I guess reality is just too boring. "There are a lot of ways of making AI safe by design, but I don't want to go through that"... way to make a ridiculous statement and then dodge. What is wrong with a mass government campaign that just tells people not to trust anything they have not thoroughly vetted with several independent sources, and if they can't do that, to assume it is either false or of no value to their current position?
@CharlesBrown-xq5ug
@CharlesBrown-xq5ug 7 ай бұрын
《 Civilization may soon realize the full conservation of energy - Introduction. 》 Sir Isaac Newton wrote a professional scientific paper deriving the second law of thermodynamics, without rigorously formulating it, on his observations that the heat of a fire in a fireplace flows through a fire prod only one way - towards the colder room beyond. Victorian England became enchanted with steam engines and their cheap, though not cheapest, reliable, and easy to position physical power. Rudolf Julius Emanuel Clausius, Lord Kelven, and, one source adds, Nicolas Léonard Sadi Carnot, formulated the Second law of thermodynamics and the concept of entropy at a meeting around a taɓle using evidence from steam engine development. These men considered with acceptance [A+] Inefficiently harnessing the flow of heat from hot to cold or [B+] Using force to Inefficiently pump heat from cold to hot. They considered with rejection [A-] Waiting for random fluctuation to cause a large difference in temperature or pressure. This was calculated to be extremely rare or [B-] Searching for, selecting, then routing for use, random, frequent and small differences in temperature or pressure. The search, selection, then routing would require more energy than the use would yield. These accepted options, lead to the consequence that the universe will end in stagnant heat death. This became support for a theological trend of the time that placed God as the initiator of a degenerating universe. Please consider that God could also be supreme over an energy abundant civilization that can absorb heat and convert it into electricity without energy gain or loss in a sustained universe. Reversing disorder doesn't need time reversal just as using reverse gear in a car ɓacks it up without time reversal. The favorable outcome of this conquest would be that the principle of energy conservation would prevail. 
Thermal energy could interplay with other forms of energy without gain or loss among all the forms of energy involved. Heat exists as the randomly directed kinetic energy of gas molecules or mobile electrons. In gasses this is known as Brownian motion. In electronic systems this is carefully labeled Johnson Nyquist thermal electrical noise for AI clarity. The law's formulaters did not consider the option that any random, usually small, fluctuation of heat or pressure could use the energy of these fluctuations itself to power deterministic routing so the output is no longer random. Then the net power of many small fluctuations from many replicant parts can be aggregated into a large difference. Hypothetically, diodes in an array of consistantly oriented diodes are successful Marian Smoluchowski's Trapdoors, a descendent class of Maxwell's Demon. Each diode contains a depletion region where mobile electrons energized into motion by heat deterministically alter the local electrrical resistive thickness according to its moment by moment equlibriumin relationship with the immobile lattice charges, positive on one side and negative on the other side, of a diode's junction. 《Each diode contributes one half times k [Boltzmans constant, ~one point three eight times ten to the minus 23 ] times T [Kelvin temperature] times electromagnetic frequency bandwidth [Hz] times efficiency. The result of these multipications is the power in watts fed to a load of impeadence matched to the group 》 The energy needed to shift the depletion region's deterministic role is paid as a burden on the moving electrons. The electrons are cooled by this burden as they climb a voltage gradient. Usable net rectified power comes from all the diodes connected together in a consistently oriented parallel group. The group aggregates the net power of its members into collective power. Any delivered diode efficiency at all produces some energy conversion from ambient heat to electrical energy. 
More efficiency yields higher performance. A diode array that is short-circuited or open-circuited performs no energy conversion, cooling, or electrical output. The power from a single diode is faint. Several or more diodes in parallel are needed to overcome the effect of a load resistor's own thermal noise. A plurality of billions of high-frequency-capable diodes is needed for practical power aggregation. For reference, there are a billion cells of 1000 square nanometers each per square millimeter. Modern nanofabrication can make simple identical diodes, surrounded by insulation, smaller than this in a slab as thick as the diodes are long. The diodes are connected at their two ohmic ends to two conductive layers. Zero to ~2 THz (1 THz = 10^12 Hz) is the maximum frequency bandwidth of thermal electrical noise available in nature at 20 °C. This is beyond the range of most diodes, yet practicality requires this extreme bandwidth. The diodes are preferably parallel, in the same orientation, at the primary level. Many primary-level groups of diodes should be placed in series for practical voltage. If counterexamples of working devices invalidated the second law of thermodynamics, civilization would learn it could have perpetually convertible conserved energy: the form of free energy where energy is borrowed from the massive heat reservoir of our sun-warmed planet and converted into electricity anywhere, anytime, with slight variations. Electricity produces heat immediately when used by electric heaters, electromechanical mechanisms, and electric lights, so the energy borrowed by these devices is promptly returned without gain or loss. There is also the reverse effect, where refrigeration produces electricity equivalent to the cooling. This effect is scientifically elegant. Cell phones wouldn't die, need power cords or batteries, or become hot. They would cool when transmitting radio signal power. 
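Combining the comment's two stated numbers, the per-diode formula and the packing density of a billion 1000 nm² cells per square millimeter, gives the aggregate power density the comment implies. Again, this only checks the comment's arithmetic under its own assumptions; it does not validate the underlying claim.

```python
# Aggregate the claimed per-diode power over the stated packing density.
# All input figures come from the comment itself; this is arithmetic,
# not an endorsement of the second-law claim.

k = 1.380649e-23          # Boltzmann's constant, J/K
T = 293.15                # 20 degrees C in kelvin
B = 2.0e12                # ~2 THz claimed bandwidth, Hz
diodes_per_mm2 = 1e9      # 1e9 cells of 1000 nm^2 each = 1 mm^2 exactly

p_per_diode = 0.5 * k * T * B             # watts, at unity efficiency
p_per_mm2 = p_per_diode * diodes_per_mm2  # claimed watts per mm^2
print(f"Claimed power density: {p_per_mm2:.2f} W/mm^2")  # ~4 W per mm^2
```

The density figure is self-consistent: 10^9 cells × 1000 nm² = 10^12 nm² = 1 mm². At unity efficiency the claim works out to about 4 W per square millimeter, which shows why the comment treats the full ~2 THz bandwidth as essential to practicality.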
The phones could also be data relays, and there could be data relays without phone features, with and without long-haul links, so the telecommunication network would be improved. Computers and integrated circuits would have their cooling and electrical needs supplied autonomously and simultaneously. Integrated circuits wouldn't need power pinouts. Refrigeration for superconductors would improve. Robots would have extreme mobility. Digital coin minting would be energy-cheap. Frozen food storage would be reliable and free or value-positive. Storehouses, homes, and markets would have independent power to preserve and prepare food. Medical devices would work anywhere. Vehicles wouldn't need fuel or fueling stops. Elevators would be very reliable, with independently powered cars. EMP resistance would be improved. Water and sewage pumps could be installed anywhere along their pipes. Nomads could raise their material supports item by item, carefully, and groups of people could modify their settlements with great technical flexibility. Many devices would be very quiet, which is good for coexisting with nature and does not disturb people. Zone refining would involve little net power. Reducing bauxite to aluminum, rutile to titanium, and magnetite to iron would have a net cooling effect. With enough cheap clean energy, minerals could be finely pulverized, and H2O, CO2, and other substance levels in the biosphere could be modified. A planetary agency would need to look over such wide concerns. This could be a material revolution with spiritual ramifications. Everyone should contribute individual talents and the fruits of different experiences and cultures to advance a cooperative, diverse, harmonious, mature, and unified civilization. It is possible to apply technology wrongly, but mature social force should oppose this. I filed for patent US 3,890,161A, Diode Array, in 1973. It was granted in 1975. It became public domain technology in 1992. 
It concerns making nickel plane-insulator-tungsten needle diodes, which were not practical at the time, though they have since improved. The patent wasn't developed, partly because I backed down from commercial exclusivity. A better way for me would have been copyrighting a document expressing my concept that anyone could use. Commercial exclusivity can be deterred by the wide and open publishing of inventive concepts. Also, the obvious is unpatentable. Open sharing promotes mass knowledge and wisdom. Many financially and procedurally independent teams that pool developmental knowledge, and that may be funded by many separate noncontrolling crowdsourced grants, should convene themselves to develop proof-of-concept and initial-recipe-exploring prototypes: devices which coproduce the release of electrical energy and an equivalent absorption of stagnant ambient thermal energy. Diode arrays are not the only possible device of this sort, but they are the easiest to explain generally. These devices would probably become segmented commodities sold with minimal margin over supply cost. They would be manufactured by AI that does not need financial incentive. Applicable best practices would be adopted. Business details would be open public knowledge. Associated people should move as negotiated and talk freely and honestly. Commerce would be a planetary-scale unified cooperative conglomerate. There is no need of wealth-extracting top commanders. We do not need the often token philanthropy of the wealthy if almost everybody can afford to be more generous. Aloha, Charles M Brown III, Kilauea, Kauai, Hawaii 96754
@CharlesBrown-xq5ug 7 months ago
《 Arrays of nanodiodes promise full conservation of energy 》 A simple rectifier crystal can, just short of a replicable long-term demonstration of a powerful prototype, almost certainly filter the random thermal motion of electrons, or of the discrete positive charged voids called holes, so that the electric current flowing in one direction predominates. At low system voltage a filtrate of one polarity predominates only a little, but there is always usable electrical power derived from the source Johnson-Nyquist thermal electrical noise. This net electrical filtrate can be aggregated in a group of separate diodes in consistent parallel alignment, creating widely scalable electrical power. As the polarity-filtered electrical energy is exported, the amount of thermal energy in the group of diodes decreases. This group cooling draws heat in from the surroundings at a rate depending on the filtering rate and the thermal resistance between the group and any ambient gas, liquid, or solid warmer than absolute zero. There is a lot of ambient heat on our planet: more in equatorial dry desert summer days and less in polar desert winter nights. Refrigeration by the principle that energy is conserved should produce electricity instead of consuming it. Focusing on the electronic behavior of one composition of simple diode: a near-flawless crystal of silicon is modified by implanting a small amount of phosphorus on one side, from an ohmic contact end to a junction where the additive is suddenly and completely changed to boron with minimal disturbance of the crystal pattern. The crystal then continues to another ohmic contact. A region of high electrical resistance forms at the junction in this type of diode when the phosphorus near the junction donates electrons that are free to move elsewhere, leaving phosphorus ions held in the crystal, while the boron donates a hole which is similarly free to move. 
The two types of mobile charges mutually clear each other away near the junction, leaving little electrical conductivity. An equilibrium width of this region settles between the phosphorus, boron, electrons, and holes. Thermal noise is a departure from steady-state equilibrium. In thermal transients where mobile electrons move from the phosphorus-added side to the boron-added side, they ride transient extra conductivity, so they are filtered into the external circuit. Electrons are units of electric current. They lose their thermal energy of motion and gain electromotive force, another name for voltage, as they transition between the junction and the array's electrical tap. Aloha
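The equilibrium depletion region described above can be quantified with the standard textbook expressions for an abrupt silicon p-n junction. The formulas below are conventional semiconductor physics; the doping levels and temperature are illustrative assumptions of mine, not values from the comment.

```python
import math

# Standard abrupt p-n junction at equilibrium (textbook formulas):
#   V_bi = (kT/q) * ln(Na * Nd / ni^2)            built-in potential
#   W    = sqrt(2*eps*V_bi*(Na+Nd) / (q*Na*Nd))   total depletion width
# Doping levels and temperature below are illustrative assumptions.

q   = 1.602176634e-19          # elementary charge, C
k   = 1.380649e-23             # Boltzmann's constant, J/K
T   = 300.0                    # kelvin
eps = 11.7 * 8.8541878128e-12  # permittivity of silicon, F/m

Na = 1e22   # boron acceptor density, m^-3 (1e16 cm^-3, assumed)
Nd = 1e22   # phosphorus donor density, m^-3 (1e16 cm^-3, assumed)
ni = 1e16   # intrinsic carrier density of Si at 300 K, m^-3 (~1e10 cm^-3)

V_bi = (k * T / q) * math.log(Na * Nd / ni**2)
W = math.sqrt(2 * eps * V_bi * (Na + Nd) / (q * Na * Nd))
print(f"Built-in potential: {V_bi:.3f} V")   # ~0.7 V
print(f"Depletion width:   {W*1e9:.0f} nm")  # a few hundred nm
```

At these assumed moderate doping levels the depletion region is a few hundred nanometers wide, comparable to the ~1000 nm² cell sizes the earlier comment invokes, which is one reason junction geometry matters at that scale.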
@deliyomgam7382 7 months ago
Waves have properties, e.g. rigidity. Maybe if a property is given a number n, then the wave that emerges from two waves, or from n waves, would have the property of the given number.
@mohammadrahimjamshidi79 7 months ago
AI and “social existence, social experience, social consciousness”. @jamshidi_rahim
@matt.stevick 7 months ago
The only AI fail so far is ChatGPT losing the Sky voice 🤦🏼‍♂️ it was good for that week 😭
@pacanosiu 7 months ago
A true so-called "AI" will explain to all of you, or your grandchildren, why me
@flickwtchr 6 months ago
huh?