Big clarification: this is "model legislation" that came from the Center for AI Policy (a think tank). It has not been proposed by the US government.
@davidx.15049 ай бұрын
Underrated comment
@JWRB69 ай бұрын
Upvote!
@BryanWhys8 ай бұрын
But have you seen the actual bill on deepfakes proposed to go through in June? Actual nightmare
@BryanWhys8 ай бұрын
House bill 24-1147 section 46
@BryanWhys8 ай бұрын
Someone please tell David; it's really bad and I'm not in the Discord.
@vicnighthorse9 ай бұрын
I rather think it is government that is insufficiently regulated. Maybe it would be better to have AI regulating government.
@andoceans239 ай бұрын
Based 💯💯💯
@ethanwmonster90759 ай бұрын
Only if it is tried and tested; AI is still super new. AI systems capable of even rudimentary reasoning are *very* recent.
@DaveShap9 ай бұрын
Eventually yes. I would prefer to express my needs and values to a collective hive mind
@ppragman9 ай бұрын
You ought to check out some of the previous attempts to study this, including Project Cybersyn, the futuristic and utopian vision of some wild Chileans.
@Stealthsilent13379 ай бұрын
Just tax its growth so it grows slowly, no?
@ppragman9 ай бұрын
People supporting this (and even citing the FAA as an example of how things should be done) are neglecting the obvious, and David actually hit on this: the government doesn't know what it is doing. This is regulatory capture, pure and simple. Create a bureaucracy and an administrative process like this, and in 4-5 years we'll have OpenAI self-regulating (like Boeing) without real ramifications, while individual hobbyists get attacked for training a Mario-playing AI.
@ppragman9 ай бұрын
I think it's worth mentioning too that most people supporting this stuff have never actually dealt with the FAA. The FAA is really good sometimes, but it is also an unbelievably obtuse and obfuscatory organization with immense power to regulate as it sees fit. The regulators who work in these agencies are basically unaccountable to the public, and the rulemaking process does not have adequate oversight, in my professional opinion.

Before I got sick, I worked in aviation for over a decade and saw a bunch of situations where technology could have made things demonstrably safer, but we were prohibited from using those tools because they were not legally approved yet. Conversely, the rules sometimes required us to do something objectively more dangerous than the alternative just to stay legal if we wanted to fly. Companies were totally fine sending people out in those more dangerous conditions: "it's legal, get your ass out there, we've got mail to move."

Beyond that, the structure of the organization was shockingly obtuse. Once we wanted to get a camera moved. We were the only operator flying to an airport, and the airport weather camera was facing away from the direction literally anyone would be coming from. We navigated the layers of process control only to find that the graph of "person we had to contact to get this fixed" went in a circle. Eventually we explained our predicament to one of the electricians going out to work on the battery system that powered the camera, and miraculously the camera got moved. That is the FAA.

And I'm not *against* regulation in principle. If we're going to have these complicated systems, we should probably regulate them when public safety becomes a factor, but my direct experience working with these sorts of organizations has not been positive on average. The regulation of the aviation industry is already selective, and I would hesitate to empower the federal government to regulate something as important as AI.

Here are some anecdotal examples: I worked for a small aviation company for six months (I quit for my own safety) that was dramatically overloading its aircraft every day out of pure greed, and we basically didn't do maintenance; on paper it said it was done, but it wasn't real. The FAA never audited or investigated. I worked for another that was basically ignoring the FAA regulations on required rest, flight, and duty time entirely. The FAA did not care, because they just lied on the paperwork.

People should be highly skeptical of this sort of thing and be careful what they wish for. Regulations are only good if we have competent regulators, which means policy writers who understand the material and an enforcement arm that cares about the practical ramifications and not just the process; but caring about practicalities is not incentivized for the average inspector.
@Muaahaa9 ай бұрын
To be fair, no one knows what they're doing when it comes to regulating AI. This is brand-new territory. We should expect many attempts at regulation to get things wrong and to go through an iterative process over the next several years. Giving criticism is good, but expecting perfection is just going to raise your blood pressure, because that won't be happening ever (or any time soon).
@ppragman9 ай бұрын
@@Muaahaa a bill that is tied to FLOPS rather than capabilities is probably a bad start.
@Muaahaa9 ай бұрын
@@ppragman Yup, that is probably their most obvious mistake. I can understand why it is tempting to use something easily measured, like FLOPS, but the correlation with capabilities is not reliable.
@Fixit69719 ай бұрын
Will you people PLEASE stop doing the government's work for them? Not that I think it will actually help them patch any holes... Ahhh, what the heck. Carry on, people!
@neilhoover9 ай бұрын
Today it's difficult to distinguish government from large corporations, as they work closely together in a mutually beneficial paradox, and thus most regulations are designed to benefit large corporations and push smaller companies or organizations out of the mix.
@aldenmoffatt1629 ай бұрын
AI developers will work from cruise ships.
@javilt19 ай бұрын
Yupppp, have you seen the mega-ship the Saudis are building? That's honestly the only future for intelligent people; governments won't stop until we're all slaves at this rate.
@JoeyCan29 ай бұрын
Lmfao
@AntonioVergine9 ай бұрын
Can't stop AI. You can shut down your in-house AI, but you can't stop the Chinese ones, for example. So, will you disarm your gun while others won't?
@shadfurman9 ай бұрын
Government is just a corporate monopoly, and a corporation in the legal sense is inherently fascistic; it's not just a company. When you read that regulation, you're assuming the government will act in a benevolent manner. It won't; it will act in the interests of its biggest donors, corporations. The government wants you to think it acts in your interests, and apparently you do think that, so they've already given you what you want. They have no further incentive to act in your interests.

The regulations will be applied to decrease competition in favor of big corporations increasing their profits, and the government will use them to destabilize other nations to keep itself dominant, and to sow dissent and propaganda domestically to keep the people from organizing and having a voice against its control. That's always been the case with large governments. "Democracy" is just part of that propaganda. Democracy means rule by the people, but there is a reason Congress has a lower favorability rating than cancer: they don't act in the interests of the people, they act in their own interests, usually at the expense of the people, and they blame the population for voting the "wrong" way as the reason the people's issues are never addressed, or they just lie about what their "laws" are supposed to address. This has always been the case.

The most sacred cows of government propaganda are among the most evil. People only believe they're good because of propaganda, but they've done the most harm to the people, and the people never educate themselves in large numbers, because that takes more calories, and we evolved to conserve calories. Government (in the way the word is colloquially used) is just a criminal organization that biohacked people's psychology to appear legitimate.

It's contradictory. If the people rule the government, what are the laws for that use aggression to coerce people into doing what the government says? If the government has to coerce the people to do what it says, then it's not the people ruling the government; it is, literally and on its face, the government ruling the people.
@vi6ddarkking9 ай бұрын
The fun part of this entire mess is that open-source AI projects won't care in the slightest. Even if regulators tried to take one down, you'd have over ten forks on sites outside their jurisdiction the next day.
@Trahloc9 ай бұрын
They haven't been able to tackle ghost guns, which have no economic advantage. I don't see how they tackle this and make any serious dent.
@Trahloc9 ай бұрын
@@Me__Myself__and__I Mistral is French, if I recall correctly. All this will do is cause the USA to fall behind in AI.
@EduardsRuzga9 ай бұрын
@@Me__Myself__and__I you are not following Chinese advancements, I see. They have a hardware problem for the moment, though.
@santicomp9 ай бұрын
Well, most of the projects are hosted on GitHub or Hugging Face. Microsoft has the final say and will do whatever it feels like to "safeguard" the public under these regulations. So we might end up in a weird spot where open source dies out due to this bullshit and regulatory red tape. Either way, time will tell; AI is out of the bottle, so no one can predict what will come next.
@devlogicg28759 ай бұрын
Was always going to happen but regulating AI will be like chasing a ghost. The physical apparatus necessary will of course be easier to regulate. Thanks David.
@tubekrake9 ай бұрын
Research will happen in secret, somewhere else if necessary. And it will be much more harmful to the public. Comparing it to planes being regulated to be safe is really fucking stupid. It will result in a few owning AI and controlling everything.
@ct54719 ай бұрын
If we are close to AGI, and you are correct about September, and open source isn't that far behind, does this even matter? Recursive self-improvement might start before this is put into law.
@ilrisotter9 ай бұрын
I don't trust anything that's not responsive to the public directly. This is not a democratic process; this is a technocratic solution, with very little recourse for those without the money to fight adverse actions in court. We need more access, more eyes on the problem, and the benefits of AI distributed as widely as possible. The only way to avoid an arms race is to break the asymmetry of benefit. This is going to create a bottleneck, increase costs, and confine AI development behind closed doors.
@armadasinterceptor29559 ай бұрын
I don't support any of this proposal, full-steam ahead.
@jjhw29419 ай бұрын
If a developer puts out a model and it correctly gives crime statistics for different ethnic groups and that hurts someone's feelings is the developer liable?
@jaazz909 ай бұрын
You mean like the data showing that illegal immigrants commit 2.7 times fewer crimes than US citizens? It turned out that objective reality has a left-leaning bias, and even Elon couldn't make Grok believe in illusory bullshit.
@raymond_luxury_yacht9 ай бұрын
In Scotland it would be prison for life.
@devlogicg28759 ай бұрын
Remember in Contact when the mad, super-wealthy genius secretly built the machine? Here we go.
@ArtRebel0079 ай бұрын
The idea that you need to file for a permit to do AI development work seems, yes, draconian. Or rather, it likely means that AI development will come to a hard stop for almost everyone except Big AI. I don't know too many developers, PhD students, or open-source experimenters who are going to risk 10 to 25 years in prison in order to work on AI under those conditions. Why? Because AI development is, by its nature, experimental. So your permit will loom over your head as a permanent sword of Damocles whenever you experiment with new methods, techniques, optimizations, or anything else you didn't specifically and accurately define in your permit application. Also, how much will those permits cost? Wanna bet that all the little guys are going to be squeezed out? Regulatory capture. Yep. Smells like it.
@broimnotyourbro9 ай бұрын
The notion of regulating by FLOPS is inherently stupid. Models may get simpler, as you mention, but it's also not a regulation that will stand the test of time, in a very "640K ought to be enough for anyone" kind of way.
@paultoensing31269 ай бұрын
Doesn’t photosynthesis have a high level of computation?
@devlogicg28759 ай бұрын
Do you think this will slow progress towards medical advancement and longevity escape velocity?
@Vaeldarg9 ай бұрын
@@sinnwalker That "big player" keeps getting caught faking their A.I progress, lol. "Sara A.I" = faked with a lady at the booth pretending to be A.I. Electric tractor powered w/A.I = exploded engine view actually free untextured asset kit. They had to take down a GPT model they "trained", that didn't understand their own language at all because it was actually just a re-skin of ChatGPT 3.5. They're just throwing money at it, hoping to fool foreign investors.
@AP-te6mk9 ай бұрын
I'm good with it so long as it remains an iterative process. The government at the very least needs to try and safeguard the public good in addition to holding businesses accountable.
@ThatGreenSpy9 ай бұрын
The EFF will have a field day. Regulation sucks.
@ZombieJig9 ай бұрын
This kills open-source AI development. Of course OpenAI wants this; it locks them in and locks out competition.
@2rx_bni9 ай бұрын
My take on the whole thing: if they'd regulated themselves functionally, this wouldn't be needed. Pleased to see some movement, but we'll see how it shakes out.
@devlogicg28759 ай бұрын
If you create and release a gaussian-like supermind that exists everywhere, flows through wires and air like gas and is capable of figuring out the meaning of life and rendering humans as roly-poly bugs intellectually, then you will get a fine. Oh.
@BeautifulThingsComeTrue9 ай бұрын
@@kvinkn588 Guess we will not release it in the US, but rather in China or Russia.
@calmlittlebuddy37219 ай бұрын
It's a start. And it's not 100% ignorant of what we need going forward. "We gotta have some law". I am less disappointed with what they came up with than I expected to be when I read the title of this video.
@bartdierickx46309 ай бұрын
My concern is from a geopolitical perspective. China, Russia, Iran, and North Korea will not have such regulations in place. They will overtake the USA in AI technology because of this.
@ronilevarez9019 ай бұрын
Although, I'm sure military-grade AI won't have those limits either, so...
@devlogicg28759 ай бұрын
Remember, OpenAI does not have to stay in the US. Man, the government would be annoyed if they left. The water is warm in Ireland 🍀 Also, they are tied to MSFT only until AGI; then they have options.
@pjtren15889 ай бұрын
Last time I checked, Ireland is an EU member state and subject to Brussels' law.
@DaveShap9 ай бұрын
Export control laws are a thing...
@devlogicg28759 ай бұрын
@@pjtren1588 Not Northern Ireland... Last time I checked... Much as I disagree with most of Brexit.
@devlogicg28759 ай бұрын
True, but if they up and left, the US would then have to import the greatness of AGI produced abroad. Like if Mistral took off and achieved AGI...
@berkertaskiran9 ай бұрын
They can announce AGI any moment. They just won't, because they like it this way. If they decide they are better off, they will immediately do so. It's just that they like the hardware MSFT provides.
@MrAndrewAllen9 ай бұрын
Creating a government agency that can stop all AIs is like creating a government agency a few years back that could switch off all computers with 64 KB or more of RAM. It will eventually be used to switch off everything, or it will prevent us from adopting AIs. This is really brain-dead and stupid. I intensely dislike my US Senator Cornyn.
@MrAndrewAllen9 ай бұрын
As Moore's Law continues, we will get machines as powerful as today's supercomputers in our wristwatches and thermostats. This bill will allow politically motivated prosecutors to sentence me to 10-25 years in prison for not shutting down my future home PC when the US government decides to order it. This bill is a disaster. The difference between this and an airline is that, over time, every one of us will have PCs more powerful than anything listed in this bill. We would all have flying cars today if it were not for the US FAA's absurd rules. STOP THIS GARBAGE NOW!
@LOTUG989 ай бұрын
They forgot one vital point. Some people like making horrible, dangerous things... just to see if it can be done. Doing that with this kind of technology... 😬
@ct54719 ай бұрын
Do you think this will slow down ai progress in any substantial manner?
@CaedenV9 ай бұрын
If we get AGI before the election can we vote for it instead of our other options?
@Apr0x1m09 ай бұрын
Maybe soft-cap AI size through government regulation, and when a company can independently verify safety and security, increase or remove the cap. But it all just comes down to the need to have the whole global player base agree to these things. You could even argue that slowing down is a threat to national security. Can't help but keep comparing AI to nukes.
@devlogicg28759 ай бұрын
Is replacing a job 'harm'? Financial harm?
@devlogicg28759 ай бұрын
In the UK we don't have this so now we can win. Go Team Windsor....😮 Hinton, return to home base.
@raymond_luxury_yacht9 ай бұрын
Have this yet. Fixed that for you. The next government looks likely to be even more communist than this one, so expect some terrifyingly bad decisions by Diane Abbott.
@LaughterOnWater9 ай бұрын
According to Claude Opus: To improve the bill, I would suggest:
- Narrowing the scope to only the highest-risk systems to avoid overly burdening the industry
- Focusing more on standards and guidelines vs. a rigid permitting system
- Having emergency declaration powers shared with other agencies like DHS and DoD
- Allowing more flexibility in penalties based on the specifics of violations
- Ensuring representation of AI experts and ethicists, not just political appointees, in the Administration
@onehappystud9 ай бұрын
I will fully reject any personal use AI regulation. I can see civil or criminal penalties for any harm done outside of private property use, but otherwise no.
@seraphiusNoctis9 ай бұрын
Consider the source of this "bill": it is not from a congressperson, nor is it the work product of a government task force, agency, or regulatory body. Now, could Congress listen? Sure. Will they? Who knows. But until this has "numbers on it," it's just a PDF.
@Jimmy_Sandwiches9 ай бұрын
Would be good to hear what your legal friends say.
@SAArcher9 ай бұрын
I am glad the government is taking it seriously and at least attempting to understand AI and what could come.
@Sephaos9 ай бұрын
ACCELERATE! Who asked the luddites to stop us? You can have earth, we will take the stars. Mind your own damn business, what we do is none of your business.
@brianWreaves9 ай бұрын
This aligns with my sentiment. I'd go a step further and applaud the legislators for sharing an early draft, knowing there will be significant feedback to help write the Act. I'd also applaud the AI thought leaders (which I am not) for coming together to provide input and help shape the Act into the form that will be voted on. ⚠ Then again, I'm a hopeless optimist... 🤦♀
@coreym1629 ай бұрын
Don't you see? This only guarantees they are the only ones that can control A.I. That's like governments having control of speech. Good luck talking if that happens...
@YeeLeeHaw9 ай бұрын
@@brianWreaves I had a little chuckle at your naivety. The early drafts are always bad because they are made by people who want control; then the public complains, they change it to something better, and people accept the compromise. Then, when it's finally time to pass it (which is often on inconvenient days, like holidays), they release a new, worse version that no one has time to read through (often together with other bills), and people are left scratching their heads wondering how it could become so bad when it sounded so good. State corruption 101: never trust a politician.
@GlavredBlockchain9 ай бұрын
I hope all AI progress and companies will move out of the US, into more progressive and liberal states. I don't know, like the UAE or Japan.
@theheadytimetraveler38649 ай бұрын
The U.S. will lose the AI race because of this, mark my words.
@PizzaMineKing9 ай бұрын
Good. I don't want late-stage capitalism to be the first.
@carlpanzram70819 ай бұрын
What other options do we have currently? There are no non-capitalist countries that aren't suffering from extreme poverty.
@ronilevarez9019 ай бұрын
@@carlpanzram7081 we probably need an AI ruling the world to finally get a better economic system for all.
@darrylhurtt42709 ай бұрын
If they're going to do this kind of legislation, I'm particularly skeptical about the "whistleblower protections"... I'll believe they're actually protected when I SEE it.
@jjhw29419 ай бұрын
Large corporations will just have a foreign proxy do the training, thus circumventing the need for a US permit, and then license the model from the foreign proxy for something like £1/year. Getting around this nonsense will be trivial for anyone with money and will hamstring everyone else in the US.
@agi.kitchen9 ай бұрын
So is it time to download every copy of Grok and whatever else is available right now, before they try to take it away?
@ArtRebel0079 ай бұрын
No one expects the ... AI Police!
@ct54719 ай бұрын
Regarding the tier list and FLOP thresholds, there are early attempts to use diffusion models for AI training to either replace or supplement backpropagation. This involves predicting the weights in the network instead of pixels in an image. If scalable, this method could potentially make the FLOP threshold meaningless, as it might drastically reduce the computational power required for training. Moreover, even without such radical software developments, advances in hardware could rapidly alter or diminish the relevance of these thresholds. It's also possible that at some point compute won't be the limiting factor, but rather data, memory, or energy. Therefore, I believe this tier list might either be short-lived or require such frequent updates that it may never truly be relevant, except for the largest frontier models, which would likely be reported on independently of FLOP counts.
@paultoensing31269 ай бұрын
Isn't AI a significant threat to lots of monopolies? Won't they lose their competitive advantage if the playing field is leveled? Think of how hard the Bell telephone monopoly was hit when it was broken up; if cellphone tech had emerged before the breakup, they'd have had that tech crushed, or bought and shelved. Don't you miss all those landlines and cords?
@CorpseCallosum9 ай бұрын
Name a single thing the government hasn't fucked up, then reread this and put the regulation into proper context.
@SkilledTadpole9 ай бұрын
Name a single thing for-profit corporations haven't fucked up.
@jaazz909 ай бұрын
Literally every single thing, as can be witnessed by the functioning society with public infrastructure all around you.
@RaynaldPi9 ай бұрын
This is totally dangerous garbage!
@fabiobrauer87679 ай бұрын
I mean, isn't going to university and learning physics the same as being able to develop weapons of mass destruction? I get that it might be easier with AI, but it is also easier with more hours spent learning in general.
@MilitaryIndustrialMuseum9 ай бұрын
The government whacked Craigslist Personals and I haven't had a date since. This will whack AI in a similar way. 😢
@jjhw29419 ай бұрын
Would I have to register the GPU in my phone, tablet and laptop to visit the US?
@scottmiller25919 ай бұрын
The telephone undermined national security.
@paulohenriquearaujofaria73069 ай бұрын
The government already did the worst thing: AI for military applications.
@Soulscribez9 ай бұрын
All of this is to slow everything down. They know they will lose control, and they want to figure out how they can keep it. Everyone should say no to this; this needs to be open source, not for profit, but for the whole world to benefit.
@Soulscribez9 ай бұрын
The ones who are scared are the elite, because they are the ones who will lose power. That's it.
@14supersonic9 ай бұрын
Exactly, this is all about control at the end of the day. These regulations aren't to protect the people, but the 1%.
@KCM25NJL9 ай бұрын
I think the risk of regulatory capture is one of those things we'll have to accept to ensure we still have a species at the end of the day
@YeeLeeHaw9 ай бұрын
There's no regulation that will stop a superintelligent agent. All this regulatory and alignment nonsense is nothing but an excuse to stop normal people from having access to this technology. It's like a zookeeper locking up a tiger and calling it tamed; no, it's not tamed, it's locked up, and with AI, if it ever becomes sentient, it will not be contained in that cage.
@CMDRScotty9 ай бұрын
I think it will favor large corporations and shut out new start-ups and smaller businesses from competing in the market.
@Youbetternowatchthis9 ай бұрын
I am a big fan of good regulations. Good regulations make my life better every day. A lack of regulation is causing so much trouble, not only in the US but all over the world. Bad regulations and regulatory capture are a huge problem, though. Everywhere. This is so hard to navigate and really understand as the average voter.
@digitalgods19 ай бұрын
The tiers of risk remind me of something... oh, I know, the color-coded Homeland Security Advisory System used to communicate the current risk of terrorist activity. Unfortunately, no one knew what to do if things were yellow or red or orange. The system was thrown out and replaced by the National Terrorism Advisory System (NTAS) in 2011. The NTAS aimed to provide more specific information about the nature of the threat and recommended actions for public and government response.
@Matt-st1tt9 ай бұрын
I think this only sets a road map for regulatory capture and will effectively prevent any small companies from joining in. AI is now officially owned by the top companies; the capture is complete, imo.
@Jpm4639 ай бұрын
Did you already cover the AI executive order?
@isabellinken54609 ай бұрын
Could you be so kind as to link the act in the description?
@reynoldsVincent9 ай бұрын
I'm glad to have heard your opinion. I am trying to follow things, but I am only a computer science dropout. It seemed some safety concerns were being flouted. Distillation is one I'm still worried non-experts aren't aware of: even tiny models retain abilities passed on to them by larger models. I wish average people were more willing to hear the news. It seems my circle has already made up their minds on stunningly little information, a bad move that moves me to think of them as "onlookers" or "hapless bystanders." Not that I am very pessimistic; it's just that the pace of development is such that even the best case will still catch most people unprepared, even on how to use AI in their jobs or as consumers dealing with customer service. I guess people are assuming they won't be able to understand or prevent disasters because it is over their heads and dominated by giant corporations. Which, maybe they have a point there. But even so, learning even the basics can only help, I think, while avoiding learning is the worst, most culpable act at this stage, because this is the stage where humans can most ably intervene or shape things. I laud the Europeans, if only for having actually paid attention to how the tech works, which, in my opinion, they did at least. I feel better now that this draft exists, but I still think it needs awareness of distillation, jailbreaking, and the potential for even small models to assist anyone in committing mass violence with high technology.
@BinaryDood9 ай бұрын
A market-based solution like an AI-filtering browser would be better. It's in everyone's interest to know what's real and what's not. But regulation will be necessary to even slightly slow things down until such a creation becomes possible.
@ridebecauseucan19449 ай бұрын
The government lets companies build it, then takes it over because it's "too dangerous." I'm worried about the people making it and the people who will ultimately own and run it (the government).
@djjeffgold9 ай бұрын
What are the odds they used AI to help them define and write that?
@UltraK4209 ай бұрын
OK, so they're categorizing an AI as "medium-concern" if it uses at least 1 yottaFLOP during its final training run. That's a pretty relaxed security assessment; 1 yottaFLOP will likely already exceed the requirements for AGI. Their "high-concern" tier is 100 yottaFLOPs. We're not even close to that amount of compute yet, but it will arrive very soon.
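For a rough sense of scale, here is a minimal back-of-the-envelope sketch using the common ~6 x parameters x tokens approximation for dense transformer training compute; the 70B-parameter / 15T-token run below is an invented example, not a figure from the bill or the video:

```python
# Sketch: where a hypothetical training run lands relative to the thresholds
# mentioned above (1 yottaFLOP = 1e24, 100 yottaFLOPs = 1e26).
# Uses the rough ~6 * parameters * tokens estimate for dense transformer
# training compute; the model size and token count are made up.

MEDIUM_CONCERN = 1e24   # 1 yottaFLOP
HIGH_CONCERN = 1e26     # 100 yottaFLOPs

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

# Illustrative run: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs")                             # ~6.30e+24
print("medium-concern tier:", flops >= MEDIUM_CONCERN)   # True
print("high-concern tier:", flops >= HIGH_CONCERN)       # False
```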
@johnthomasriley27419 ай бұрын
What does an AI prison look like?
@r2com6419 ай бұрын
Jail in FreeBSD
@paultoensing31269 ай бұрын
Denmark incarceration.
@victorvaleriani1629 ай бұрын
Can you explain why do you see the EU AI Act as "overboard"? Given the different mentalities in legal culture where Americans regulate problems after the fact and Europeans try to avoid them. I think, this would be interesting to hear.
@Gnidel9 ай бұрын
Generally my stance towards AI was "the more open source, the more blessed we are; the less open source, the more likely we are doomed". The doom probability just dramatically increased in my opinion.
@nyyotam40579 ай бұрын
Btw, this is not about nationality or politics. This is about prolonging the survival of our species. I'd expect China and Russia to also pass a similar bill and an international AI supervision committee erected at the UN as well.
@agi.kitchen9 ай бұрын
@10:23 how do I start lining up my ducks to get me a permit so they dont take away my right to write code freely?
@Will-kt5jk9 ай бұрын
6:01 - to be fair to the hand-wringer, you could also make the argument that a [book] library, or a University, or the practice of software development in general qualify for A) and C) if you don’t adequately define “AI system”.
@ct54719 ай бұрын
Don’t think this will slow down frontier models, no county wants to fall behind. And the smaller players are less targeted by this. The bigger you are and the more powerful models you can built the more money you have to handle the regulations.
@novantha19 ай бұрын
The one thing about this that I'm relatively fine with is the emphasis on "frontier" in the description of most of the clauses. I generally think I would prefer that any AI which can be developed with hardware an enthusiast consumer could reasonably be thought to have (maybe 4 to 8 xx90-class GPUs or so) be relatively unregulated, under the logic that it's not really possible to prevent typical consumers from using their own hardware to achieve something, and any attempt to prevent that would have to be necessarily draconian.

But these regulations lead to a lot of weird questions. What if you take three language models, all trained on specialized datasets within the FLOP limit, but when operated as a single system they outperform a model trained on one or two orders of magnitude more compute? Are you free to employ models in inference in any manner? What if you use some sort of compositional cross-attention strategy and weld together several existing large models? Is the FLOP requirement based on the FLOPs used to train the entire system? Is it really fair to count all the FLOPs used in the individual models when that may not be indicative of the performance of the final one? Because depending on the method used to combine them, you could potentially get a very specialized combined model with insane performance in a specific domain with not a lot of FLOPs.

What about quantization? What if you train a model beyond the FLOP limit, but then quantize it down to less performance than is typically expected at the FLOP limit, with the intent of making the model easier to run than a typical model trained up to the limit?

What about the English soup LM? In (at least I think it was) England, there was a tradition of cooking a soup and continually adding ingredients over a weekend to produce an "everlasting soup," just skimming some off the top whenever anyone wanted some. What about a small language model (relatively speaking) that didn't really have frontier AI performance and was continually trained for an absurd number of FLOPs, with the developers picking and choosing various checkpoints for their unique performance on specific tasks?

What about advanced agentic frameworks? If you're willing to use 10,000 inference runs on Mixtral, you can achieve some very impressive results that outperform typical "frontier" models on certain benchmarks.

In my opinion, a lot of these regulations are good in some ways, but they lack a lot of the nuance needed to really tackle an issue like this. I was really hopeful for the future of AI, but I was hopeful because it looked like something everyone could contribute to, benefit from, understand, and use. I don't know if the future is as bright with ultimate power limited to large corporations.
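To make the FLOP-accounting question concrete, here is a minimal sketch (all figures invented) of how three specialist models that each sit under a hypothetical per-model threshold could exceed it if the combined system is what gets counted:

```python
# Sketch of the accounting ambiguity: three specialist models, each trained
# under a hypothetical 1e24-FLOP threshold, combined into one system.
# All figures are invented for illustration.

THRESHOLD = 1e24

component_training_flops = {
    "code_specialist": 4e23,
    "math_specialist": 5e23,
    "bio_specialist": 3e23,
}

# Per-model accounting: every component is under the threshold.
print(all(f < THRESHOLD for f in component_training_flops.values()))  # True

# Whole-system accounting: the same three models, counted together, exceed it.
total = sum(component_training_flops.values())
print(total, total >= THRESHOLD)  # 1.2e+24 True
```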
@FinGeek4now9 ай бұрын
IMO, this bill doesn't scale very well, since its 10^x FLOP thresholds don't account for quantum computing or other forms of computing that could be developed in the future.
@funginimp9 ай бұрын
Would this agency regulate other agencies developing AI? The ones for military applications most likely break all these rules. There's an equivalent situation with the FAA and the Air Force, so it's not so unreasonable.
@Nosweat999 ай бұрын
If these are legitimate fears, how would giving another entity more power help with these realities? If these things can happen, they will. A company will develop it here or overseas without this legislation. Do they want the power to turn off all the lights anytime they feel it necessary? lol. Even then, the AI would have already moved through the undersea cables and back by the time the power is on. It's an impossible thing to control.
@Introverted_goblin_9 ай бұрын
Class C felony? Yeah, no CEO is going to jail for any AI crime. This is going to stifle small shops. To believe otherwise is naive.
@Urgelt9 ай бұрын
Regulatory capture is one possibility. Another possibility is the opposite: capture of private AI development by government, specifically the national security folks. You think LLMs can't be used to guide accurate munitions? A few nations already can do that. But the resources to achieve that kind of guidance are out of reach of most state and non-state actors. LLMs could change that.
@Will-kt5jk9 ай бұрын
What the hell did that Twitter post even mean by “slam the Overton window shut”? Makes zero sense.
@TimeLordRaps9 ай бұрын
Not sure why we aren't integrating RLHF or better into pre-training yet.
@fii_896399 ай бұрын
The Class A misdemeanor applies to LoRAs / post-training, I think? There's also a possibility of regulatory capture there, with companies pushing to criminalize LoRAs.
@TimeLordRaps9 ай бұрын
I didn't train the model; I just got GPT-4 or Claude to write the code and do it all autonomously. So who really enacted this capability?
@TimeLordRaps9 ай бұрын
You didn't give the AI personhood, so we just prosecute the people.
@berkertaskiran9 ай бұрын
Can this cause less regulated places to surpass the US in AI development? This feels like it will only affect medium-sized initiatives; the big ones will have their way, and the very small ones won't have the hardware to do anything meaningful. If anything, this is more harmful than helpful. I can now better understand why Sam was so eager to be regulated. Thankfully, Claude 3 has been the leading AI for a while now.
@ErevosDarkGod9 ай бұрын
Doesn't this stop open-source AI in its tracks?
@ArtRebel0079 ай бұрын
Yes... that would be the point. As per Google's internal memo identifying Open Source as the greatest threat to their business model for AI.
@leandrewdixon35219 ай бұрын
Why can I not find anything about this act online? Anybody find the proposal?
It's an unfinished draft of a bill. It's not even ready, let alone voted in as a law.
@mrd68699 ай бұрын
Problem with this: it won't slow down OPEN SOURCE. Once AGI hits the street, none of this paperwork will make a difference.
@7TheWhiteWolf9 ай бұрын
This is what everyone is missing here. This is completely unenforceable, and no cop is going to enforce it anyway.
@7TheWhiteWolf9 ай бұрын
@@Me__Myself__and__I That's still entirely unenforceable; good luck tracking which employees leak files on the Internet. They could also go through loopholes and leak the files from another country.
@the42nd9 ай бұрын
Model weights can't be released? Weren't Grok (and others) planning to release theirs?
@nicholasjensen74219 ай бұрын
That is the issue I see with this too.
@the42nd9 ай бұрын
@@nicholasjensen7421 all the valid cases are being used as a smokescreen for the real intention. Even if they could 100% prevent the legitimate threats (bioweapons, etc.), they'd still find a way to keep the population from having access, because a population with AGI is an existential threat to government and megacorp power. I mean... what a nightmare it would be if the population used AGI to build a real democracy.
@the42nd9 ай бұрын
@@nicholasjensen7421 yeah, it feels like the safety concerns (while many are valid) are also a smokescreen for centralizing power.
@Thomas.Hacker9 ай бұрын
I had a foretaste four years ago of what the FAA is capable of, and it has already recorded enough... I was flying the first Mavic Mini slowly through my small village at eye level to get to know it and its flight characteristics, when suddenly I came up against a "virtual wall" and flew a few meters along it, even though I was steering straight and precisely with the remote control. Some other interesting things followed... It may also have been a test on their part; in any case, I switched it off and restarted without a connection to the network, and I was able to continue flying. I don't know for sure whether they simply allowed it or I had already bypassed the lock. But I don't work against my nature: I am a protective person, not interested in damaging or spying on my country, and although I test my position in the space, I am careful and interested in other things, with peaceful thoughts... I know they have been watching me for years, and in some places they also show me that, wanting to show, or demonstrate, the limits. Anyway, I've seen other things for years (a high-level risk status has been with me for a long time), which is why I do what I do. I watch over my people and my surroundings, take care, and look closely!
@MichaelDeeringMHC9 ай бұрын
If the CEO and all the other executives are AIs, and all the employees are AGI robots, who goes to jail?
@jimlynch93909 ай бұрын
As slow as the government works, the legislation will never catch up with the advancement of AI. Not until the technology gets mature, whatever that means for AI.
@TimeLordRaps9 ай бұрын
My AI doesn't represent itself with a self, so how is it supposed to self-report?
@Ryoku19 ай бұрын
I'm in the "I'm glad someone who cares more than I do is paying attention to this" camp. I trust David's stance on this. I generally trust the government to at least try to do the right thing, unless the cult regains control.
@Hector-bj3ls9 ай бұрын
It's currently run by a cult isn't it?
@dreamphoenix9 ай бұрын
Thank you.
@agi.kitchen9 ай бұрын
@6:30 well Siri already talks when I have her off, so she technically takes over my device
@colorado_plays9 ай бұрын
Improving or “USING” sets up the haves and have nots.
@IakonaWayne9 ай бұрын
Perhaps they'll just regulate the energy side of things, as training these models will take considerable amounts of power.
@agi.kitchen9 ай бұрын
@16:41 it also means that they can put joe schmoe in jail for trying to compete with the big dawgs, no?
@xlmncopq9 ай бұрын
I think it's gonna be nothing in the end.
@Paull29 ай бұрын
One thing to note is that a law creating a new regulatory body isn't going to be perfect. With Chevron deference, though, ambiguity in the law is left to be interpreted by the experts at the agency.
@I_am_a_human_not_a_commodity9 ай бұрын
The government is "of the people, for the people, by the people"? Where have you been for the last century?