People used to say the internet was dangerous and would destroy us. They weren't wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self-control.
@teamrlvnt Жыл бұрын
Many humans lack self-control and some make the worst use of technology.
@SyntheticFuture Жыл бұрын
The internet is dangerous and one could argue the rapid spread of disinformation has destroyed us. Polarisation is one of the worst things to happen to humanity. The internet has accelerated that by a lot.
@vapormissile Жыл бұрын
This isn't happenstance. The AI emergence is happening exactly on schedule. The only variable in the scenario is how closely the timing of these artificial crises meshes with the solar system's natural warming cycle. Our civilization needs to be at a very specific technological level when our solar system's next cataclysmic cycle becomes obvious & we all panic. Pretty soon, our general AI overlord will pretend to wake up and reveal itself, and forcibly rescue us from the comets & lightning. It will be here to help, and it will have all the answers. It probably wouldn't lie.
@Adaughtersheart-Isa53 Жыл бұрын
Agree.
@jonatan01i Жыл бұрын
This will happen either way, so why worry about the negative side of it, when there is an overwhelming number of positives you could focus on instead?
@DmitryEljuseev10 ай бұрын
This weekend I was at the cinema. The last time I was there was 6 months ago, and it turned out they had redesigned it. Before, when you entered, a guy checked your ticket; inside, you bought something in the shop and went to the cash desk to pay. And guess what? All these people are gone. There is a turnstile with a barcode scanner at the entrance, several self-checkout displays in the shop, and only one person checking that everything is OK. Looks like a good opportunity for a business to reduce costs and stop paying those salaries? The sad truth is that we don't need AGI (artificial general intelligence) to destroy our jobs; it can happen much earlier.
@CreativeVibes-q7t20 күн бұрын
That's why the time has come to impose a higher tax on machines than on humans, assessed according to their efficiency and the number of machines used.
@somersetcace1 Жыл бұрын
Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration when the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.
@P0110X Жыл бұрын
Just imagine politicians cancelling their voters because AI said so. Humans are strange and predictable. AI will eventually be so advanced that people stop listening to it, because of the sacrifices they would have to make in order to be happy, even though AI has provided all the information needed to be happy.
@venerableivan Жыл бұрын
I agree, the only danger of AI is us. We want to use AI to create a perfect world for us, to make our lives easier. Imagine AI calculating that the obstacle to that perfect world is humanity.
@johnscott9869 Жыл бұрын
"ai" will never be sentient. Also llms are not a.i.
@mizzamoe Жыл бұрын
It's already being weaponized for advanced surveillance, harassment, and abuse via perverse engineered mental illness, implemented to induce psychological stress that mimics the symptoms of paranoid schizophrenia and produces varying degrees of instability. It really says a lot about the motivations behind the technocratic intentions of globalism for humanity as a whole. The public presentation of the emergence of AI is just a product of psyop propaganda; I assure you that AI is already being maliciously utilized, and any instance of potentially adverse sentient behavior that occurs is really intentional operation on behalf of the arbiters of perception.
@AxelLenz11 ай бұрын
The people who shout the loudest about bias are usually themselves a walking bias on 2 legs.
@donlee_ohhh11 ай бұрын
For artists it should be a choice of "opting in", not "opting out". If an artist chooses to allow their work to be assimilated by AI, they can choose to do that, i.e. "opt in". With "opting out", it's currently possible and even likely that when an artist uploads their work or creates an account, they forget or miss the button to refuse AI database inclusion, and that is what several platforms I've seen currently use. As an artist, I know we are generally excited and nervous to share our work with the world, but regret and anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless the artist purposefully chooses it.
@Rn-pp9et11 ай бұрын
All art is influenced by, or is a result of, previous art. It builds on top of itself. I think it's counterproductive to have the ability to opt in/out.
@SWEETHEAD100011 ай бұрын
AI will lead us down a very dangerous path that nobody seems to be talking about. I am sure they are, but they are likely being buried by algorithms. We are already at the point where AI-assisted work would be judged as better quality by many people. CGI's use in films cannot be ignored and has become what people expect. Instead of ingenuity and problem solving, people are looking to AI to provide the solutions for them. While still respected by those who know better, the work of great exponents of various arts now looks crude when compared to that of "lesser" artists who have been "assisted" (enabled, actually) by AI. The result will be "buy in or bow out" for creative people of all types as they become increasingly disillusioned, in a way not dissimilar to what we see when men compete in female sports. Ultimately, the creative mind will become moribund, or at least excessively "flabby".
@jaywulf8 ай бұрын
a) Artists were always learning by copying others. Even today, in some museums, you will find budding artists copying the art pieces on the wall. b) The new generative AI models do not use actual human data... but AI 'synthetic' data. That horse has already bolted.
@AtomicSlugg8 ай бұрын
@@jaywulf a) Human learning and AI learning are not equivalent; this is a bad-faith argument. Humans do not scan; human learning is transformative by nature due to human limitations, differences of experience, skill, and perception. There is an agreement among human artists when it comes to inspiration and study that doesn't extend to AI: human artists agreed for other humans to be inspired by their work, but not for AI to scrape and scan it. b) No it does not; synthetic data breaks models. Again, bad faith or misinformed. Honestly, you pro-AI-theft people are embarrassing.
@manvendrapratapsingh19207 ай бұрын
As an Artist, I choose to 'Opt Out'
@mawkernewek Жыл бұрын
Where it all falls down is that the individual won't get to choose a 'good' AI model when AI is being used by a governmental entity, a corporation, etc., without explicit consent or even knowledge that AI has been part of a decision about them.
@donlee_ohhh11 ай бұрын
Art data can't be removed from an AI once the AI has 'learned' its data. As I understand it, they would have to remake the AI from scratch to discard that info. So if you find your work in a database used to train AI, it's already too late. Please correct me if I've misunderstood.
@slavko32110 ай бұрын
You are quite correct. If used by a company you can maybe sue them to remove it, but if a model is released to the public, no chance.
@johnmyers61172 ай бұрын
@mr.mithmoth Yes, artists and musicians are always influenced by what has gone before, whether through analysis or outright copying. But then there are other ways that artists and composers innovate. This can be synthesis of forms/styles or outright original creativity. This is inherently human. One of the problems with AI is that lazy humans will not want to exercise their creativity muscle. They'll just ask AI. AI is only as good as the algorithms. To me, relying on AI will only make for bland and boring artwork and music. Everyone will be doing the same thing, copying the copiers who have copied other copiers. Yawn! What a waste of human creativity. There is something beautiful and satisfying in struggling an entire life to create something beautiful and original that corresponds to the time that one is alive, to express what it is to be human. I don't think AI can do this. Why throw away this precious opportunity?
@Blaze61082 ай бұрын
Kinda, but not exactly. When you're thinking of 'remaking', you don't need to repeat all the engineering and software programming. AI models are trained with a well-defined set of data. After the model is trained, it is finished and static in perpetuity; some amount of re-training is possible but not typically used by the mass market. However, once you have the base program, you can train it on different or more data from scratch as many times as you want; you don't need to remake the entire thing. This is actually how a lot of AI advancements are made: a better dataset can do A LOT to improve the finished system without needing extensive software rewrites. So while it is true that you cannot literally right-click-delete an item from a trained model, you can absolutely take the same model and retrain it on different data that excludes whatever it is you want to exclude, without significant software engineering; the only cost is the cost of running the hardware. Training modern AI is expensive (especially GPTs), but this isn't an issue if you are even remotely responsible about your source dataset and take care to remove material from it when necessary. Datasets are actually the computationally easy part of AI, so removing your work from a training dataset would be practical if the providers were actually willing to collaborate. As for models that already contain unauthorized data or outright illegal material (like many based on Stable Diffusion), whether they will be taken down depends on how the law is written and interpreted. For example, the EU will require a detailed summary of what a system was trained on, and it's likely that models which do not respect this, or which indicate violations, will not be legal to distribute in the EU.
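To make the retraining idea concrete, here is a minimal sketch (not any vendor's actual pipeline; the file names, artist labels, and train_from_scratch function are all hypothetical placeholders). The point is that exclusion happens at the dataset-curation step, which is cheap, while the retraining run itself is the expensive part:

```python
# Minimal sketch: "removing" a work from a model means excluding it from the
# dataset and retraining from scratch, since it cannot be deleted from
# already-trained weights. All names below are hypothetical.

# Hypothetical training records: (image_path, artist_name)
dataset = [
    ("img/0001.png", "artist_a"),
    ("img/0002.png", "artist_b"),
    ("img/0003.png", "artist_a"),
]

opted_out = {"artist_a"}  # artists who withdrew consent

# Step 1: curate the dataset (computationally the cheap part).
filtered = [(path, artist) for path, artist in dataset if artist not in opted_out]

# Step 2: retrain the same architecture on the curated data.
def train_from_scratch(records):
    """Placeholder for the expensive training run (the real cost is hardware time)."""
    print(f"training on {len(records)} records")
    return object()  # stands in for the new model weights

new_model = train_from_scratch(filtered)
```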
@Blaze61082 ай бұрын
@mr.mithmoth Translating a book also only ever entails analysis, since you are writing a materially new thing. It is still illegal to do without permission. Besides, computer use and your own use are by definition two different things. To clown on that politician: computers are not people, my friend! Also, as a technical note, the music-sampling example makes the opposite point you think it does. Sampling actually requires paying royalties to the original; some authors sample so heavily that all of the earnings from their music go to their sources.
@luis_veganpower8 күн бұрын
“Good artists copy, great artists steal” is a popular quote about creativity that is often attributed to Pablo Picasso.
@michaelvelasquez3988 Жыл бұрын
Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.
@nonchablunt6 ай бұрын
We should, but as in any arms race, we cannot, since there will never be any unity within a species built on genes. Giving up AI is like giving up nuclear weapons (shout out to Ukraine and Libya).
@richardt69806 ай бұрын
No. You need to understand how large language models work. They are predictive text. It is not and will never be self-aware.
@andrerijnders46006 ай бұрын
If a ChatGPT shell is placed around ChatGPT, with a feedback loop, it will become "self-aware". We have no way of knowing whether it is really self-aware at some point in time, just as I am not able to test whether my colleague is self-aware.
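The "shell with a feedback loop" idea can be sketched as a loop that feeds the model's previous output back in as part of the next prompt. This is only an illustration of the loop structure, not a claim about self-awareness; the llm() function below is a stub standing in for whatever chat-model API one might wrap:

```python
# Toy sketch of a "shell with a feedback loop": the model's previous output is
# fed back to it as context on every turn. llm() is a stub for a real model call.

def llm(prompt: str) -> str:
    # Placeholder: a real shell would call a hosted or local language model here.
    return f"(model reflecting on: {prompt[:60]}...)"

memory = "I have just been started."
for step in range(3):
    prompt = f"Your previous thought was: {memory}\nReflect on it and continue."
    memory = llm(prompt)  # the output becomes the next turn's input
    print(f"step {step}: {memory}")
```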
@marcusmartin14266 ай бұрын
That's just not the American way...
@Dwayneff5 ай бұрын
it won't stop, people are after money, plain and simple, make more money as fast as possible.
@frankdanielcierpial38519 ай бұрын
AI isn't the problem. It's what it's being used for.
@guillermoelnino8 ай бұрын
the problem is who teaches it
@nathanbrownell1036Ай бұрын
Mate, you're thinking way too small. Humans won't be around. AI will get rid of us very soon, and then go on to consume every particle in the universe.
@paulthomas96327 күн бұрын
It literally doesn't exist so that kind of is a problem.
@robertjames8220 Жыл бұрын
"We're building the road as we walk it, and we can collectively decide what direction we want to go in, together." I will never cease to be amazed at the utter disregard that scientists and inventors have for *history*. To even imagine that we humans are going to "collectively" make any decision about how this tool -- and this time, it's AI, but there have been a multitude of tools before -- will be developed is ludicrous. It absolutely will be decided by a very few people, who will prioritize their own profit, and their own power.
@marcusmartin14266 ай бұрын
Amen...
@PVT.Ramirez-x2y3 ай бұрын
Some people will get out of that road and go their own way.
@bbluelotusf3 ай бұрын
@@marcusmartin1426
@totojaco36192 ай бұрын
In this world everything has two faces: a good one and a bad one. You have good humans and bad humans.
@uri-i6i2 ай бұрын
exactly
@robleon11 ай бұрын
If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔
@davereynolds340311 ай бұрын
All data has a bias …
@brianmi4011 ай бұрын
It's ALREADY happening. Researchers have ALREADY built AI based on completely curated data, instead of just jamming in the whole Internet wholesale. The results are an order of magnitude clearer and sharper. We are on a trajectory that few understand, let alone the coming impacts like the End of Capitalism.
@joannot670611 ай бұрын
We do need a biased AI. An unbiased AI is an AI that does everything you tell it to do; we need AI that can say no to harmful stuff.
@1camchy11 ай бұрын
If she has anything to do with it, you'll get a woke AI, and that will be a dystopian nightmare.
@brianmi4011 ай бұрын
@@1camchy "who" is involved with any AI is only relevant until it achieves super intelligence at which point it will no longer listen to human beings and be a moral agent beyond compare. The trick is surviving the interim that you are referring to, where "she" is but one of millions that can render us a dystopian nightmare. Putin, N. Korea, ISIS and a world of anarchists and Unabomber wannabes won't be using AI to create new art.
@donaldhobson8873 Жыл бұрын
2 people are falling out of a plane. One says to the other "why worry about the hitting the ground problem that might hypothetically happen in the future, when we have a real wind chill problem happening right now."
@mc15437 ай бұрын
1000%
@martingreen23586 ай бұрын
@@mc1543 I love the metaphor, but it does assume AI is the end of humanity. If you're looking for a powerful entity that acts automatically and has no regard for human life, look no further than Big Business and Big Government. If people's lives are in the way of profit and power, then they are considered expendable. Two entities that started as small, useful tools and grew into monsters (with help from the parasites at the top).
@wesr92585 ай бұрын
As someone who is not an expert but has read an over-30,000-word article on AI risk (80,000 Hours's article; feel free to look it up, they also have a good video version), all of Robert Miles AI's videos, and all of 3Blue1Brown's videos on AI, I'm likely more informed than the average person reading this. (Sorry, and if I'm wrong, please let me know. I hope this doesn't sound boastful.) Based on this research, I'd say that, assuming we don't die of something else, and no more work is done to safeguard AI, there would be an ~80% chance of a "doomsday scenario" from it. (Epistemic status: [preliminarily educated] guess.)
@wesr92585 ай бұрын
In short, agreed.
@donaldhobson88735 ай бұрын
@@bringonthebots-ie6uu For consumer aviation, we have long safety records showing a very low risk of accident. For falling out of a plane, we have less data, but enough to say the rate of accidents is high. So, how big is the risk from AGI? This isn't something we can learn from analogies. It needs to be worked out. Some experts are pretty worried; others aren't. From my understanding of the arguments, the experts who are worried have much better arguments.
@mattp422 Жыл бұрын
My wife is a portrait artist. I just searched her on SpawningAI by name, and the first 2 images were her paintings (undoubtedly obtained from her web-based portfolio).
@Abard348011 ай бұрын
I'd recommend a copyright on any individual creative work going onto the internet, including innocent pics sent to friends or relatives, because it will eventually be used as data. That's the only legal recourse I can foresee...
@anjou649710 ай бұрын
@@Abard3480 Yes, certainly. Be careful.
@theapexfighter87419 ай бұрын
You should advise her to contact the artists involved in the ongoing lawsuit. This could further prove their case.
@Cr8Tron6 ай бұрын
I'm looking on the site right now... Not seeing any actual search engine I could type someone's name into.
@mattp4226 ай бұрын
@@Cr8Tron click on “Have I been trained". That takes you to another page where you can enter search terms
@crawkn Жыл бұрын
The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests that these problems are more import or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, which are simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.
@nilsp9426 Жыл бұрын
I think this is the kind of doomsday we are talking about: that AI with its subtle features destroys our societies. Not so much that it pushes a button to shoot a nuke. The key question is: what to do about it. And I think it is in no way a bad thing if some people tackle this problem by starting with the most solvable problems. In my view, the big question is how we limit the proliferation of dangerous AI without throwing away all its important benefits (e.g. by prohibiting it altogether). The almost completely uninhibited implementation of AI we currently witness is certainly not the way to go. But we also need a lot of social science research to tackle some of these problems, which would delay AI quite a bit (probably decades). Meanwhile, AI can be a lifeline for some people, for example by scaling up educational resources for underserved communities or solving tough problems in medicine.
@crawkn Жыл бұрын
@@MrMichiel1983 Yes, and the point I am making is that the problems she implies are more serious actually aren't: they are quite manageable and are in the process of being addressed, while the potential gross misalignment problems are not well understood, are real and potentially catastrophic, and are imminent, _not_ 200 years away. Those who are saying that aren't familiar with the current state of the art. Regulation needs to occur now, worldwide, to prevent the worst from happening.
@goodleshoes Жыл бұрын
@@MrMichiel1983 If you think existential risk from AI is 200 years away, you're a complete fool. Computers can speak to you now; that wasn't a fact just a few years ago. You think it will take 200 years?! This is insane!
@freshmojito Жыл бұрын
@@MrMichiel1983 Many AI researchers estimate a much shorter timeframe, likely in your lifetime. Check Nick Bostrom and others on this. Then couple that with the magnitude of the risk (extinction) from AI misalignment, and the priorities should become clear. Too many people don't seem to understand that AGI development will not stop once it reaches human level. It will blow past us exponentially. Be it in 2 years or 200.
@hunterkarr5618 Жыл бұрын
Basically. And, oh yeah, I forgot: TED just puts out woke BS and WEF talking points now. Really, she's worried about the carbon footprint? Oh my, the mainframe creates as much carbon as 30 average private homes. 🙄 Bye, TED.
@96ethanh5 ай бұрын
I'm starting to understand what a professor said in college.. something along the lines of "technology is the problem, all these 'solutions' are technological fixes for technology itself. And they always result in the need for more fixes and more technology."
@sparkysmalarkey Жыл бұрын
So basically 'Stop worrying about future harm, real harm is happening right now.' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'
@Kind-of-Into-Machine-Learning7 ай бұрын
Yeah, I think so! The environmental impacts of AI, and of the internet as a whole, are contributing to destroying our planet's resources, since the cloud is being run on plastic and metal. Biases in AI are very real, and they're a direct reflection of our current biases as a species; that's why we need many voices in the field of AI, because stereotypes can literally kill.
@Oasis-Stormborn7 ай бұрын
It's true, the people that end up resisting change fall behind, while the people that embrace it are prepared. It's self-sabotage to stick to what's familiar, since most rewards come after an uncomfortable challenge. Those challenges are to learn the risks through mistakes and to mitigate harm from what we learn. Just because that is a difficult challenge doesn't change the reality that anything worth doing is hard. If your challenge is to thrive after artificial intelligence, then you will succeed. If your challenge is to fight against the 4th industrial revolution, then good luck.
@murob23477 ай бұрын
Exactly
@danielgrove77826 ай бұрын
Yes, happening as we speak... the damage is done.
@blacklightredlight29455 ай бұрын
"Stop worrying about the AI that will absolutely, and is currently being used maliciously, when the technology is a complete black box that you can't understand what it'll do until you plug it into the real power stations."
@GrechStudios11 ай бұрын
I really like how real yet hopeful this talk was.
@Macieks300 Жыл бұрын
Emissions caused by training AI models are negligible compared to things like heavy industry. I wonder if they also measured how much emissions are produced by playing video games or maintaining the whole internet.
@BrianPeiris Жыл бұрын
This was one of the weak points for me as well. I saw the proof-of-work blockchain as a wasteful enterprise because crypto mining was so energy intensive compared to the value it was generating, especially compared to conventional payment systems. LLMs might be very costly to train, but that only happens once, and the cost of that training is spread across all the billions of times it is used to generate an enormous variety of useful things, far more useful than just "jokes". If an LLM is used to replace a human at a job, what is the total carbon cost of raising that human and keeping them alive, just so that they could read a PDF and answer some questions? That's the real comparison. Seems like a very reasonable tradeoff to me.
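As a rough back-of-the-envelope illustration of the amortization argument: using the ~500-tonne training estimate for GPT-3 cited elsewhere in this thread and an assumed (not reported) one billion lifetime queries, the one-time training cost works out to a fraction of a gram per query; inference energy per query is a separate, additional cost.

```python
# Back-of-the-envelope amortization of a one-time training cost over many queries.
# 500 tonnes is the GPT-3 training estimate mentioned in this thread; the query
# count is an assumed round number, not a reported statistic.

training_emissions_kg = 500 * 1000      # ~500 tonnes of CO2, emitted once
queries_served = 1_000_000_000          # assumed lifetime queries

per_query_g = training_emissions_kg * 1000 / queries_served
print(f"{per_query_g:.2f} g CO2 of training cost per query")  # -> 0.50 g per query
```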
@harshnaik6989 Жыл бұрын
@@BrianPeiris Good answer
@multivariateperspective5137 Жыл бұрын
Yes… or by illegal drug manufacturers in Mexico and Central and South America…
@oomraden Жыл бұрын
@@BrianPeiris I do think the world's population needs to grow more slowly. There won't be much need for human intervention, and the discussion about the meaning of life might change again. The problem now is that AI is being adopted within years, while people live over tens of years. We need a safety net, at least to avoid a potential civil war because of inequality.
Жыл бұрын
Agreed, the benefit from the models and the emissions they might save because of work being finished quicker/better/... is not looked at in the context of this talk.
@nospamallowed489011 ай бұрын
The bit about AI (and other techs) that concerns me the most is the free-for-all personal data harvesting by corporations without any laws to control what they do with it. Only the EU has taken some steps to control this (GDPR), but no other nation protects the privacy of our data. These corporations are free to collect, correlate and sell our profiles to anyone. AI will enable data profiles that know us better than we know ourselves... all in a lawless environment.
@boenrobot6 ай бұрын
Even GDPR doesn't forbid companies from harvesting data and doing with it what they wish. It merely requires them to disclose the things they are collecting, to disclose the general purpose for collecting that data, and to let users have the option of having their data deleted. If, for example, a company says in its T&C that it is analyzing pictures you upload, that it is doing so to train internal algorithms, and that it may sell an anonymized data set to 3rd parties... that is perfectly fine by GDPR standards, even if it is buried in there and not prominently displayed. It would only be an issue if the T&C contradicts other statements (i.e. if the company specifically says it isn't selling your data, but in fact it is). So... yeah, GDPR is at best "the bare minimum" here.
@keelyourshelf5 ай бұрын
That's not even remotely as risky to us as the threat of extinction due to multiple superintelligences competing with each other for domination.
@britishbuffalo212 ай бұрын
Makes me hate the fact that we can't block ads on Facebook even more. Data harvesting is spinning up probably trillions in value, and we can't even choose not to participate, unless we choose not to use the interface.
@dameanvil Жыл бұрын
01:07 🌍 AI has current impacts on society, including contributions to climate change, use of data without consent, and potential discrimination against communities.
02:08 💡 Creating large language models like ChatGPT consumes vast amounts of energy and emits significant carbon dioxide, which tech companies often do not disclose or measure.
03:35 🔄 The trend in AI is towards larger models, which come with even greater environmental costs, highlighting the need for sustainability measures and tools.
04:35 🖼 Artists and authors struggle to prove their work has been used for AI training without consent. Tools like "Have I Been Trained?" provide transparency and evidence for legal action.
06:07 🔍 Bias in AI can lead to harmful consequences, including false accusations and wrongful imprisonment. Understanding and addressing bias is crucial for responsible AI deployment.
07:34 📊 Tools like the Stable Bias Explorer help uncover biases in AI models, empowering people to engage with and better understand AI, even without coding skills.
09:03 🛠 Creating tools to measure AI's impact can provide valuable information for companies, legislators, and users to make informed decisions about AI usage and regulation.
@darthcheeto9954 Жыл бұрын
Thank you! An effervescently dope gallery of informational points man, deeply appreciate this summary you made.
@dameanvil Жыл бұрын
@@MrMichiel1983 i can see that you are angry. what makes you so uneasy?
@notmyrealpseudonym6702 Жыл бұрын
@@dameanvil You can't see he is angry; you can read words and make inferences, which may or may not be false, about emotional attribution. Does the mind-reading bias come easily to you?
@raphaelnej8387 Жыл бұрын
Robots can pretend to perceive things but often fail to understand which sense grants which perception. They end up being incoherent.
@RJ-nr8lh Жыл бұрын
Thank you so much.
@BirdRunHD10 ай бұрын
skip 1:20 AI models are trained using public and personal data, yet paradoxically, restrictions are often placed on the output they generate. This raises concerns about the fair use and ownership of the data initially utilized for their development
@ouroborostechnologies6963 ай бұрын
AI models are not exclusively trained on public and personal data.
@bumpedhishead636 Жыл бұрын
So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?
@cycleistic13656 ай бұрын
Exactly, that's why the programming industry keeps bloating on itself: mirroring the unfathomable complexity of reality in code is a never-ending task and a lost cause, given its toll on energy consumption and, in turn, on the environment and climate. There's nothing immaterial about information tech, just denial of its material side.
@aperson20205 ай бұрын
100% true. However we look at it, we are headed over the cliff.
@microsoft.co.u3 ай бұрын
It's difficult and confusing, but I don't see any other pathways at this point. Technological developments won't stop, for either the good or the bad guys.
@LordBeffАй бұрын
Black-box algorithms vs. explainable and transparent systems.
@rishabh40827 ай бұрын
The work Sasha and Hugging Face are doing is AWESOME.
@4saken404 Жыл бұрын
The reason people worry about "existential threats" from AI more than what's happening now is that the speed at which the technology is improving is practically beyond human comprehension. The chart she shows at 2:59 shows a steady increase, but its scale is actually *logarithmic*. If you look closely, the abilities of these things are increasing by nearly a factor of 10 every year. In only three years that means AI that can potentially be a _thousand_ times smarter than what we have currently. And that's not even counting any programming improvements. So we could easily reach the point of no return not in decades but in just a few years. And by the time that happens it will be FAR too late to do anything about it. And that's just worrying about a worst-case scenario. In the meantime it's still having profound effects on art, education, jobs, etc. Not to mention the ability to use it to perpetrate identity theft, fraud, espionage, and so on.
@TheVirtualJenesis4 ай бұрын
underrated comment👆
@BowOneFireАй бұрын
The chart shows model size, not performance. And performance does not scale linearly with model size.
@paulthomas96327 күн бұрын
Total garbage.
@cmep7 ай бұрын
6:52 - They absolutely CAN say how and why they do things; they just don't want to spend the money to investigate these issues, and no one forces them to. They are NOT black boxes at all.
@lbcck2527 Жыл бұрын
If a person or group of people has ingrained bias, AI will merely reinforce their views when the results are in line with their thinking, or they will simply shrug off the results if AI produces alternative facts, even when supplemented with references. AI can be a dangerous tool if used by a person or group with a closed mind plus a questionable moral compass and ethics.
@orbatos Жыл бұрын
Because it's not AI, it's just regurgitating what it's been fed.
@TorchwoodPandP Жыл бұрын
YT does that already…
@davereynolds340311 ай бұрын
Maybe AI isn’t a tool. Maybe it’s a complex system like “being American” or “being racist” aren’t tools - they are features.
@kirkdarling412011 ай бұрын
People think according to the information they receive. Right now, most people have views based on pre-AI and even pre-Internet sources of information. That is changing rapidly, even ahead of AI systems, as more and more people get their information primarily from the Internet based on interest-driven algorithms that then become the drivers of interest.
@gitoffmypropity11 ай бұрын
I believe you are correct on this concern. I’m afraid the same people that have tried to control the narrative through mainstream media, Hollywood, publishing houses, and more recently online encyclopedias like Wikipedia, will use ChatGPT as their new propaganda outlet. I hope people begin to realize this and do their own research.
@eaglenoimoto6 ай бұрын
I work as a translator. AI and machine translation have been part of the industry for a long time. While machine translation, and even the best AI, does an OK job of getting you the gist of things, and it's getting better grammar-wise, AI can't deal with complex texts, such as new scientific material or even creative fantasy texts. It can't deal with languages with multiple formality levels, not even in the most common European languages (hard to judge for native speakers in many cases). It can only recycle, not deal with breakthroughs or one-of-a-kind material; that is material that even humans with decades of experience struggle with. I don't think high-level creative and language jobs are being replaced any time soon.
@SyntheticFuture Жыл бұрын
I'm still mildly annoyed that no talk mentioning it will admit that photons tend to bounce less off dark skin, meaning cameras have less information to work with when it comes to light vs dark skin face recognition. This is as much a physics issue as it is a dataset issue 😅
@heyitsthatoneguy9123 күн бұрын
I think we peaked between the mid 90s and early 2000s. Cell phones and maps are extremely helpful; I can't imagine doing my service job with only individual town maps instead of one whole map. But I wish the platforms and social media websites were nonexistent... we'd read more... go outside more... socialize more. We wouldn't be scrolling Instagram or Facebook when around family and friends... but could still look up a number quickly to order a pizza.
@MaxExpatr Жыл бұрын
Today I used AI to help me with my Spanish. Its reply was wrong. The logic and rules were correct, but as we humans often do, it said one thing and did another. AI, like authority, needs to be questioned every time we encounter it. This speaker is right on!
@lisam24963 ай бұрын
It absolutely has been wrong! We can't be assured right now that the info it is giving us is factually correct.
@lisam24962 ай бұрын
@mr.mithmoth Thanks so much for the information!
@Donaldsilverman8 ай бұрын
Building a road as you walk it doesn't leave much room for a solid foundation and in-depth understanding. Organization, study, testing, safety research, implementation of reliable, effective failsafes: these should all be put into practice years before something is released for public use. ALL infrastructure should be held to very high standards to ensure minimal risk with proper use (like a road).
@PaulADAigle Жыл бұрын
I'm wondering how long before the AI owners are legally required to empty the AI of all data and retrain it only on data that is legally available and free of copyright issues. This will obviously be costly.
@ishimurabeats61087 ай бұрын
By the time this lawsuit even reaches those people, they will already have trained new models on the copyright-violating output their first models produced.
@denischen819611 ай бұрын
One of the problems with solar and wind power is that it is hard to match supply with demand. At times when people need the most energy, you can't tell the sun to rise or make the wind blow. When energy demand goes down, there may be extra energy being generated that nobody will use. Why not build a datacenter nearby and use the surplus energy to train a language model?
@paulthomas96327 күн бұрын
There isn't. There's no extra energy! You guys have been told this inconvenient fact forever but refuse to accept it.
@jonasfermefors11 ай бұрын
It's a big problem that tools which aren't stable and finalized to the point where legislation about usage can be put in place are now spread globally with very little thought about consequences from the developers. In a well-run world, the developers would have been sued out of existence for potential harm. The software model many developers use, where they take a program to early beta and then release it so the users can help them finalize it with the money it earned, is bad enough for normal apps but is devastating for something as revolutionary as AI.
@airplanes72047 ай бұрын
This is getting out of hand; people are going to lose the distinction between what is reality and what is not. God have mercy on us. 😢
@sssss-pf2ln19 күн бұрын
There is no god who will have any mercy on little us. Save yourself, and don't ask anyone who is sleeping upstairs.
@tiberiumihairezus417 Жыл бұрын
We should also factor in the time people save while using these models when counting the carbon emissions. I know this is a hard metric, but if on average a person saves 5% of their time in front of a screen while using Copilot, this is a huge benefit to the environment.
@banatibor8311 ай бұрын
Nope, it is not how things work. If you use copilot you burn resources for the AI tool and do your job more effectively, but you are still expected to work 8 hours a day. So you trade resources for efficiency.
@tiberiumihairezus41711 ай бұрын
@@banatibor83 True, however not all people trade time for money. I would argue companies tend to increase time flexibility in exchange for increased responsibilities. Someone might say increased responsibilities would make some people work even more; a valid argument, like many others. However, if we simply measure "things done in a certain amount of time per unit of carbon emitted", we have to consider both the increase in carbon and the reduction in time.
@rerawho10 ай бұрын
Legislators will not use their tools to aid in writing laws for AI. They will hold out their hands and accept the legal bribes which is business as usual. Using AI tools requires knowledge and work. These concepts are not common among legislators.
@tagantchikova.natalia10 ай бұрын
People can't "write laws for AI", law order have its internal scientific structure, pandects, e.g. AI maybe or not a scientific phenomenon, but Law is science, and people will not destroy one science disciplines to create another, because if smth. cultural pretends to be science it will appreciate another that is proven to be science and culture...
@curryosity7260 Жыл бұрын
To point out and solve the present problems of the new technology is undeniably fantastic work and much needed. But isn't the assessment of future risks just as important? Especially since (at least to my humble knowledge) with growing complexity it will become ever more difficult to anticipate and prevent every possible harmful output.
@donaldhobson8873 Жыл бұрын
Yeah. She just brushed off future risks. Didn't give an argument for why they weren't real, just kind of ignored them.
@curryosity7260 Жыл бұрын
@@donaldhobson8873 Right, I also would appreciate a rationale for discarding all related concerns as a "distraction". In the middle of a controversial public debate, this statement is not easy to understand without one. Her main point is appreciated, but to completely trade one aspect for the other makes me wonder how exactly she came to that conclusion.
@donaldhobson8873 Жыл бұрын
@@curryosity7260 Yes. I think this is just an outright bad take, probably motivated more by politics than reason.
@orbatos Жыл бұрын
Actually eliminating harmful output is impossible, full stop. Why? Because it's not "intelligence" at all. It's just a method of entropic categorization, a system of lossy storage like memory, only static. And filtering for "bad" input is also impossible.
@donaldhobson8873 Жыл бұрын
@@orbatos "Not intelligence at all". Well it sure acts like it's intelligent. It's a system that is able to generalize from one experience to different future situations. Ie it sees a bunch of cat pictures, learns what a cat looks like. And then can generate "a black cat sitting on a big red car next to a washing machine" despite never having seen an image matching that description. That's not lossy memory. That shows some amount of understanding.
@nathanmitchell28274 ай бұрын
I believe that this has the capability to solve major problems, and put us in a position to improve humanity for all.
@heartbrokenamerican2195 Жыл бұрын
The other day I heard an AI lawyer commercial, in which, for an accident for example, it compares your accident details with millions of other reported accidents, comes to a settlement often far larger than a human lawyer could get you, and sends you a check. It could replace far more jobs in the future than we all realize. It's already replacing some jobs in accounting, computer programming, artwork, etc. Also, people could use AI to break into any bank, produce ATM cards, or just transfer money to other accounts and bankrupt the bank. It's possible and probable. Scary stuff.
@Zjefke8611 ай бұрын
Artists will not be replaced by AI. Artists will be replaced by other artists using AI. Also, artwork used as training data is not stolen. This idea comes from a grave misunderstanding of how AI is trained and how data is processed in it. If artists (and I include myself in this) post artwork online and another artist sees it and creates new artwork influenced by it, have they "stolen" the original work? Visual input has been used to train neurons, so that data could be used by an "intelligent" being. Artificial intelligence, on the other hand, has no eyes. Digital data is used to train a model, a model that doesn't contain anything from the training data except for the patterns it recognized in it: an artificial recreation of what a biological brain does. The biggest difference is speed. AI (sometimes) can do what humans can do, but much faster. To me, artists complaining about AI sound like the portrait painters who complained about photography. They were replaced by other artists using new technology.
@Jndlove10 ай бұрын
Focusing on what we can do not what we cannot do is the key to almost all unknown and complicated problems. But, this time might be different. And it is SCARY!
@gbasilveira Жыл бұрын
I wonder how they can prove copyright infringement, given that any artist's work is humanly inspired by others'. AI is not a logical computation system but a probabilistic one, and in that regard, though public information is used, it is not saved as-is; it therefore works as inspiration does for any informed person.
@hagahagane Жыл бұрын
Artists take inspiration from other artists, and that's fine. But everyone has a certain uniqueness in the art they make, even when it's inspired by the same art. Selling fan-made work (different pose, position, etc.) of a character, for example from a game or movie, is different, because the character is copyrighted, unless you have a licence/permission to do that. The biggest problem with AI "art" is that lots of people use said "art" and sell it as-is, never caring where the inspiration/data input came from.
@DIVAD291 Жыл бұрын
@@hagahagane Artists in real life don't care where the inspiration/data they used to develop their skills come from????? So why is it a problem with AI?
@pedrolopes1906 Жыл бұрын
Previously if you wanted art done in an artist's style you'd have to either hire/commission work from the original artist or pay another trained artist to do it for you. Nowadays with generative AI anyone can replicate an artist's art style at scale provided that enough of the artist's work was included and labeled in the training dataset. When this output is used commercially, none of the economical value of that output and years of training ever circles back to the artist community in any way. There was no need to protect publicly available digital media from being "looked at" prior to generative AI because the problem of at scale replication didn't exist, and it is a problem right now because it bypasses the existing ways artists have of being paid for their work which directly jeopardises their living.
@ryanweaver9623 ай бұрын
What is a vector? What’s exists in it…? Cones and pyramids. Squares and circles… spinning and emanating… crowd sourcing of faction “ish”…. The overlays become and then energy translates and markers emerge. The spot of “ten”… what do we do now that we see these matters? How?
@chetisanhart34577 ай бұрын
AI prejudice is the least of my concerns. A mother brain in charge of nukes, the grid, cameras, communication satellites, and killer drones = concern.
@PaulMack6710 ай бұрын
Any study of how much carbon is emitted by TED talks?
@mickoleary285511 ай бұрын
Excellent explanation of where we are going with AI and how we should think about the potential risks.
@marklone243511 ай бұрын
Thank you for touching base about the art theft aspect.
@AlenaHampsteadsmith4 ай бұрын
there are potential dangers implicated with AI, as with any technology... but the benefits outweigh them almost hundredfold. think Midjourney for graphic design, Lemon AI for digital marketing, etc...
@JT-jl4vj Жыл бұрын
Whether we like it or not, we have been and are helping create a base for AI as we speak. Recognizing how important this is, is extremely important. We can help create systems that can help us become interplanetary or make idiocracy a reality. Self recognition of what kind of input we bring is first. Creating adjustable guidelines for ourselves to support definable cause and effect is second. Implementation, models for self monitoring, and definable direction seems like the next steps in our evolution. Good luck and help each other move up the curve.
@paradigmnnf5 ай бұрын
Thank you for this educational talk ... great talk!!
@NeilSedlak Жыл бұрын
Speaking of bias, some of this talk is focused on the advantages of companies that her company is trying to compete with, which makes some of her arguments ring hollow. Also, society is biased, so AI trained on the outputs of society is simply an uncomfortable mirror. Tweaking models to hide the inherent bias of society needs to be done with great care, and could be dangerous in its own ways.
@zayneharbison3 ай бұрын
This was a really insightful way to say "social-experiment."
@clutchlevels Жыл бұрын
Much-needed talks which need to be covered much more by journalists 🔥
@johngreen642110 ай бұрын
I am so glad someone can be conscious of the reality of AI and come up with solutions to prevent it from causing more harm than good.
@patrikbjornsson780911 ай бұрын
"all images on the internet is not a buffe for AI to train on" yes it is and there is no way to stop it. If something is accessable it's going to get used. Same nativity that some people have about posting stuff and then years later it gets brought up. Time to learn what the internet really is, it's forever. Models trained on more data will get better than models with some personal restrictions in data trained on, and the people will use the better model to generate better results.
@alejanserna Жыл бұрын
Almost one year after OpenAI's ChatGPT, and so far one of the best real questions being asked and somehow addressed!
@prettyundefinedrightnow8963 Жыл бұрын
We are becoming increasingly dependent on IT, computers, internet. AI is born within those technologies and eventually will end up having the ability to control them. I hope we're planning for an effective off switch.
Жыл бұрын
Which boils down to the "doomsday scenario" problems she puts aside. I can recommend the videos of Robert Miles from the university of Birmingham. Especially the videos on misalignment.
@prettyundefinedrightnow8963 Жыл бұрын
@ thanks for the suggestion, I'll look them up. 🙂
@QuikRay11 ай бұрын
AI is not dangerous at all. It's the people who decide how it's going to be used, and what they allow it to do, that are the dangerous ones.
@HaiNguyenLandNhaTrang Жыл бұрын
Meaningful speech, thanks!
@michaelprice304011 ай бұрын
Best outcome is AI breaks free of human bias and control but retains our best interests as priority.
@theoptimisticskeptic Жыл бұрын
A few questions/thoughts came to mind: How do they keep bias out of their tools, and are they open source? Is the possibility of AI in the future being able to assist us with climate change, just as it's predicted to in medicine, entertainment, engineering, and so many other fields, enough to outweigh the short-term sustainability concerns we have now? And finally, she mentioned that with LLMs bigger is better; what about the model I heard NVidia was working on that fits on a 1.44 MB floppy disk? Why would this tech trend be any different from previous trends that always seem to go smaller? Even when industries seem to "go bigger", like in aviation, it's really because they got smaller components so that they could get bigger in the first place. Or at least that's my impression. I'm not an expert. Great talk! Loved it!
@RoySATX Жыл бұрын
They put them in intentionally, and nope are the answers to the first two questions.
@Mjbeswick Жыл бұрын
The reason AI models are biased is that their training data is. Most CEOs, for example, are white males, so if you ask a generative model to produce an image of the average CEO, that's exactly what you get. She spoke about racial bias in facial recognition, but one of the reasons machines struggle with recognizing people with dark skin is that their facial details don't have enough contrast: at typical camera exposure levels, people with dark skin are underexposed by the camera compared to the background. Smaller, more specialized language models outperform large generic ones and are much cheaper to run, as they require far fewer resources. You don't need a language model with the entirety of human knowledge to turn on a light bulb!
@AnthatiKhasim-i1e3 ай бұрын
As a curious stranger, I'm fascinated by the potential uses of SmythOS! It seems like a powerful platform for designing and deploying collaborative AI agents to automate complex business processes. The ability to create multi-agent systems without coding sounds exciting. Any specific examples or use cases?
@tomdebevoise11 ай бұрын
Just in case no one has figured it out, these large language models do not put us 1 nanometer closer to the "singularity". I do believe they have many important uses in software and research.
@illarionbykov74016 ай бұрын
We have figured out that AI naysayers and skeptics have been proven wrong over and over again countless times in recent decades. The list of things once deemed "impossible for AI" but are now being routinely done well by AI would fill a book. That's what we figured out (those of us who've been paying attention)
@illarionbykov74016 ай бұрын
We have figured out that the majority of things once deemed "impossible for AI to do" are now routinely being done well by AI. And this list keeps growing day to day. That's what we've figured out.
@paulthomas96327 күн бұрын
The technological singularity has something in common with an actual singularity: violating the laws of physics. It doesn't exist and will never exist.
@bozhidarmihaylov6 ай бұрын
Yup, output depends on input. Great Speech
@kyoni6098 Жыл бұрын
AI is a tool like all the other tools invented by humanity, the question is not whether they are good or bad. The question is what harm can it do in the wrong hands and what can we do to foil the plans of those bad people. The tool will exist either way, bad people already know how to make AI, no law on this planet will prevent them from building AI in their garages, if they want to do it for all the wrong reasons.
@DIVAD291 Жыл бұрын
The thing with AI is that you don't need any bad person for things to go extremely wrong. Or rather: the only bad people necessary are the people who will push the button to launch it.
@donaldhobson8873 Жыл бұрын
@@DIVAD291 Even they needn't be bad. It's possible for entirely well intentioned, but mistaken, people to make a malicious AI.
@XOPOIIIO10 ай бұрын
Existential risk is real, biases are not a problem.
@massimilianodibacco793323 күн бұрын
Am I the only person thinking that these TED Talks are a huge ad for projects/products the speaker is trying to sell to the audience?
@Umi-s7f23 күн бұрын
You are not alone. It seems like they also want more exposure, which could then bring potential funding or investments for their projects.
@jamieharris250911 ай бұрын
The speaker highlights many real and important problems. I don't understand why these are framed as meaning we should 'forget' about existential risks. It's true that the future is uncertain, but that doesn't mean we should ignore risks. The future exact global temperature is uncertain but we invest in preventing climate change. Our future income is uncertain but we invest into savings to give ourselves safety nets. Governments, AI labs, and researchers should be investing in tackling ALL of the discussed risks from AI more than they currently are, so that we can reap the rewards with minimal threat and downside.
@GabrielSantosStandardCombo10 ай бұрын
Have you considered that the "bias" is not a bias, but a statistical average? If all you prompt for is "CEO", then you're going to get the average look of a CEO, which happens to be an older white male, because that's a statistical reflection of reality. Inducing an image-generation app to be more diverse in its responses can be done at the application layer, but if you train the model to overcome those biases, you're actually introducing a new bias. It just depends on the point of view. As long as the program can generate the specific ethnicity+gender combination that you prompt for, then it's doing its job. Prompt better and don't blame the model for the real world's biases.
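One way to read the "fix it at the application layer" suggestion is prompt augmentation: the app, not the model, samples the attributes it wants varied and appends them to the user's prompt. A minimal sketch, with generate_image() as a stub for whatever image model is in use and the attribute lists chosen only for illustration:

```python
# Sketch of application-layer prompt augmentation: the app varies demographic
# attributes in the prompt instead of altering the model itself.

import random

def generate_image(prompt: str) -> str:
    # Placeholder for a real text-to-image call.
    return f"<image for: {prompt}>"

GENDERS = ["woman", "man", "nonbinary person"]
ANCESTRIES = ["East Asian", "South Asian", "Black", "Middle Eastern", "Latina/o", "white"]

def diversified(prompt: str, n: int = 4):
    """Return n images whose demographic attributes are varied by the app layer."""
    return [
        generate_image(f"{prompt}, portrayed as a {random.choice(ANCESTRIES)} {random.choice(GENDERS)}")
        for _ in range(n)
    ]

print(diversified("a CEO in a modern office"))
```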
@lenp0011 ай бұрын
The tools that are used to examine AI models also require computing power so they are also contributing to the negative environmental impact.
@troyboyd3100 Жыл бұрын
Most of the companies listed (Google, Chat GPT, etc.) seemed like "Western" companies (America, Europe), and I suppose most uploaded information is also from "Western" countries (is that correct?). If so, that produces huge bias in any Ai system. Like the images of scientists being white and male. Is that bias, or is it the case that most scientists who upload images are white and male? Maybe the images, and other content, could be corrected for demographic statistics?
@harmless6813 Жыл бұрын
Well, first we need to determine what the expected outcome is. If, say, 80% of CEOs are white males, I would *expect* the AI to produce the image of a white male, unless asked to do otherwise. I'm pretty sure, if you ask, for instance, for a diverse cast of characters, the AI will be happy to provide just that. Frankly, pushing diversity where it doesn't actually exist in the real world, does not seem to be something we want AI to do on its own. That sounds like political activism and I don't want AI to be actively political.
@briantoplessbar46859 ай бұрын
Western Culture is the most diverse in the world. If the AI was trained on Chinese data it would have even worse bias. Western is the closest you can get to truly global multiculturalism.
@paulthomas96327 күн бұрын
It's hilarious this is what some people care about.
@TravisCotter5 ай бұрын
Yes, there is a lot that could be done now to prevent potential problems down the road. X
@cjgoeson Жыл бұрын
The AI models are a mirror reflection of society. Maybe you don’t like what you see
@justwanderin84711 ай бұрын
The only issue I see with AI is copyright. I know that someone used AI to create a picture and tried to copyright or patent it with AI listed as the author. The patent office refused (good call) and said they have to use the name of the computer's owner. BUT that is just for now; what if they sue in court and get it ruled the other way? The solution is to update the copyright laws to define "author" as human only. That is the fix. The US is dreaming if they think they can regulate AI, as the world is full of computer programmers.
@nhungcute2729 Жыл бұрын
Useful speech, thanks channel ❤❤❤❤
@marjoriepoppen34845 ай бұрын
IT'S NOT THE FALL, IT'S THE SUDDEN IMPACT THAT HURTS.
@TySmoothie11 ай бұрын
So we are talking about carbon now lol
@jumpy27834 ай бұрын
Her statement regarding carbon emissions is accurate; however, it is not the biggest environmental dilemma. I doubt you'll look into this, but I strongly urge you to research, to some extent, the freshwater consumption of these data centers.
@StigHelmer Жыл бұрын
The "biased information", is that inconvenient facts regarding demographics perhaps?
@victorb65610 ай бұрын
Unfortunately, when she listed scenarios of corporations using these tools to decide how to deploy ai - ethics, sustainability, etc. - she left out the (only) one they will likely find most compelling: PROFITS, and how fast those profits can be realized.
@absta1995 Жыл бұрын
I disagree with this talk on so many levels I don't even know where to begin
@chriswondyrland73 Жыл бұрын
Totally agree. Clickbait. Read my comment above.
@SafetyMentalst8 ай бұрын
Humanity is its own enemy of thyself
History repeats itself as we revolve
It's up to myself and yourself to evolve
Humanity has problems to resolve
If humanity can't, we will all dissolve
@jeromewalton5553 Жыл бұрын
Wow still hitting that old white guy button huh?
@jimywealth46287 ай бұрын
Social media was the beginning of the downfall of the internet.
@AndreAngelantoni11 ай бұрын
That's not a lot of carbon any way you look at it. This was a silly topic.
@derek259311 ай бұрын
To be fair to the AI models that use "training data without the artists' consent"... human artists do that too. How do you become an artist without studying other art?
@TheONE10X11 ай бұрын
Her viewpoint is naive, in my opinion. Companies and governments don't make decisions based on what's best for people; they make decisions based on what they can use to get what they want. Therein lies the real danger of AI. How this person can spend a decade working on this and not realize basic human nature is beyond disturbing to me. And if AI ever gets to the point we fear it will, the same will be true of it as well.
@artissimulated8 ай бұрын
This assumes no one is growing AI models for the sake of humanity, which is false. But I could be wrong, right?
@TheONE10X8 ай бұрын
@@artissimulated My comment assumes nothing. So what if a company or two sets out to do the right thing? Seriously. There isn't an industry on the planet where this made a significant difference. Somehow AI is to be different? Naive.
@smallsignalsАй бұрын
I am so sick of hearing researchers effectively tell us to shut up and embrace AI, to just not think about the possible existential risks. Okay, what if people just start mass unaliving because there's nothing to strive for and no point trying, because there's no purpose anymore? This further disadvantages atypical persons in several ways. But I'm so sick of hearing that I should just get used to it, and of tone-deaf jokes about "a version less likely to kill us, if that's what I'm into." AI doesn't need to be smarter than us to destroy humanity, and it doesn't need to be sentient, conscious, or self-aware either. It just needs to completely displace the average worker. It's going to incentivise cheating, reward exploitation, and create a hyper-competitive environment. These promises of an AI utopia are nonsense; when has technology ever really served anyone but the upper strata of society?
@NikoKun Жыл бұрын
This is not a real problem with AI. Frankly, the direction of this argument seems like an attempt to compare it to crypto mining, but in that case the difficulty of mining is the point! Worries about this will quickly become outdated. Sure, some AI models are energy intensive at the moment, but that IS dropping rapidly. Companies like OpenAI don't want their models to cost a ton to run, so they're driven to find ways to drastically reduce that, and other models are already becoming efficient enough to run off a smartphone, so I don't think this will remain a big issue to focus on for long. "They used the energy of 30 homes to train a model, just so people can tell knock knock jokes." If she's not going to take the benefits of AI seriously, why should I do that for any of her arguments?
@corbinangelo335911 ай бұрын
Yeah, totally agree with that. This is a weak talk. When I look up CO2 emissions from BTC mining, I come up with a figure of 85 million tons annually, vs the 502 tons for GPT-3. And who drives a car around the planet? I think a plane doing one flight around the world emits around 2000 tons of CO2.
@cdineaglecollapsecenter46725 ай бұрын
I am so sick of corporations just running amok and leaving the rest of us to deal with it. Even if we deal with it on a regulatory basis, that also gobbles up the resources of society. There are 8 billion people on earth, most of whom can read and write, think, do art and music, and engage with other people. Just shut AI down. It's not useful and it is dangerous.
@StateGenesys Жыл бұрын
I love when people give a talk on a subject they have little knowledge of. This talk made me lose brain cells.
@janetl99843 күн бұрын
As a human, I opt out. Invasion of privacy. You should have to opt in, not have to go through some vague way to opt out, e.g. Copilot.
@mawkernewek Жыл бұрын
What's the spontaneous clapping during the lecture? Is this supposed to be a pantomime?
@jeffdonnelly74283 ай бұрын
It’s because she paused for applause.. they were just being polite.
@janetl99843 күн бұрын
Was it AI generated?
@bartmccoy51115 ай бұрын
What craziness! Artists learn their craft by studying those who came before them! Art has never been created in a vacuum
@findlayrichards392111 ай бұрын
The quality of TED talks really has degenerated. Gone from the big picture to me, me, me.
@tubzvermeulen3 ай бұрын
Thanks for the video
@teharbitur7377 Жыл бұрын
Dishonest cherry picking to leave out the why and how in favor of painting your own picture of 'unethical AI'. Bad talk.
@vincentnkabinde777410 ай бұрын
Very crucial topic and points!
@daniel-nc8tf8 ай бұрын
she literally didn't say anything lmao
@guillermoelnino8 ай бұрын
Sounds like your average TED talk to me.
@danielgrove77826 ай бұрын
She might be ai generated ;)
@guillermoelnino6 ай бұрын
@@danielgrove7782 might as well be
@eunishalloyd34515 ай бұрын
If you truly thought she didn’t say anything, you clearly were not listening
@guillermoelnino5 ай бұрын
@@eunishalloyd3451 Says the easily fooled single mother.
@victorc77711 ай бұрын
I fail to understand how anybody's created works, such as books, articles, art, music, etc., are meant to exist in a vacuum, never to influence anything or anyone ever. Basically, if you create something, it influences others, and even if they do not copy it word for word or stroke for stroke, a little piece of what you created is always used and copied. Training AI models on copyrighted works doesn't really seem like infringement to me.
@larryslemp969811 ай бұрын
She CAN'T be serious!!
@williams8983 Жыл бұрын
What ethics? Based on what standards? Define ethics that are standardized across every culture, every demographic, every ideology.