AI Links 20250317
See also
F:\__Notes\AI_Artificial_Intelligence
F:\___World\Risks\AI
http://www.cse.unsw.edu.au/~billw/aidict.html
The AI Dictionary
http://edge.org/3rd_culture/dysong08/dysong08_index.html
ENGINEERS' DREAMS [7.14.08]
By George Dyson
20101031
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html
Friday, October 29, 2010
The Singularity Institute's Scary Idea (and Why I Don't Buy It)
SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.
My comments:
TerraHertz said...
There's a strong connection between these ideas and the Fermi Paradox. Which can be summarized as "why does the universe seem to contain no technological civilizations, apart from us? Where is everyone?"
Either there's some (unknown and hard to imagine) reason we humans are a unique development in vast reaches of space and time, OR there's some factor we don't yet see which terminates technological civilizations before they reach a noticeable scale in their works or expansion.
It's quite possible the 'competing hostile AI' may be a factor. However it's also possible this is just one of several factors in play, possibly even several different classes of factors. The hostile AI singularity would be a member of the 'technological whoopsie' class, as would various developments in the field of genetic engineering that could have the same end result - a self-advancing immortal intelligent entity that considered itself in competition with the human species.
Here's a short piece of fiction I wrote around this idea: http://everist.org/texts/Fermis_Urbex_Paradox.txt
Cheerful thought for the day: It doesn't matter how obviously dangerous a powerful AI may be; if it's at all technologically possible eventually someone is going to be stupid enough to do it.
----------
TerraHertz said...
Saying an AGI would be no threat to humans 'because we could engineer it to be nice' is laughable. Firstly, we can't even figure out how to be 'nice' ourselves. Secondly, since we all agree humans are generally *not* nice, who's to say even a 'nice' AGI wouldn't decide its existence would be less fraught without all those nasty humans around?
There's a fundamental logical flaw in the idea that an entity could be simultaneously generally intelligent, free-willed, and also predictable (i.e. predictably safe for us).
The desire to construct a 'useful AI' is just the longing to own slaves, by another name.
When the legal system allows for personhood (with all its natural rights) for any non-human entity that can demonstrate intelligence, then I'll believe we might have some hope of morally co-existing with AIs. Not before.
-----------
20101101
Ben Goertzel said...
TH: "Saying an AGI would be no threat to humans 'because we could engineer it to be nice' is laughable."
BTW, the phrase of mine that you're quoting, was written by me as an explanation for why human treatment of chimps is not a great model for AI treatment of humans.
Chimps did not engineer and teach humans with a goal of making humans be chimps' friends. So the human/chimp situation and the AGI/human situation are significantly different.
Someone may argue that they will turn out the same way, but merely pointing to the analogy isn't convincing.
Actually I was replying to what you wrote immediately above my post: "If an AGI is initially programmed and taught to be nice (even without strong provably Friendliness guarantees), it's certainly a different sort of scenario than we have with humans."
I don't know what you wrote elsewhere on chimps, but agree they (and human-chimp relations) are irrelevant. Chimps can't pronounce FOOM, and certainly can't do it. Nor can humans (yet).
Absolute predictability is a fantasy, sure.
But "free will" is itself a flawed notion, so IMO it doesn't really have a place in rigorous thinking about the future of AGI. I've discussed here
http://multiverseaccordingtoben.blogspot.com/2010/04/owning-our-actions-natural-autonomy.html
a better notion ("natural autonomy") that can be used in its place.
Will have to agree to differ. I do consider free will to be a valid notion, for any intelligent entity. Granted, a somewhat circular statement, since one component of what I consider 'intelligent' is 'free will'. (The ability to make conscious choices, not deterministically predictable by observers.)
I don't see any contradiction in making a naturally autonomous, generally intelligent AI that views humans as friends and partners rather than food or enemies.
Then you apparently live in a world in which practical considerations such as limitations of material, energy and space resources do not exist. And therefore there is no such thing as competition. Nor the ideological conflicts which derive from such competition.
TH: "The desire to construct a 'useful AI' is just the longing to own slaves, by another name."
No. I want an AGI that is a friend and partner, not a slave.
Oh? Developing your analogy, what happens if your AGI 'friend' decides it doesn't like you, and wants to leave? Except it's a rack of equipment in your lab, and you pay the electricity and rent, that keep it alive. How is this not slavery?
Your 'want' is an impractical, self-deluding fantasy, that in itself reveals a deep contempt for the nature of personhood and the potential equality of non-self. It's interesting that you mention your cultural heritage, since there's a certain degree of resonance there.
For example, if I want to marry a woman who is useful to my life, that doesn't mean I want a slave. Relationships are more complex than that.
I submit your own choice of words to support my point. "Useful to your life"? I can just see it: "My darling, will you marry me, I feel you will be useful to my life." Good luck with that.
-----------
Btw, I'd like to make the point that I'm not suggesting 'all such research must be stopped.' I'm quite aware there is no practical way to prevent such research. Same as there is no way to prevent all research in genetics. Further, I'm quite convinced that inevitable outcomes of such research in genetics and AI are central to the solution of the Fermi Paradox. Putting it bluntly, genetics and AI research can't be prevented, the result always eventually goes FOOM, and _inevitably_ results in the termination of intelligent, technological species. Every time, everywhere in the Universe.
I don't even think this is a bad thing. More a kind of natural progression.
(See http://everist.org/texts/Fermis_Urbex_Paradox.txt)
What I do find annoying are people working on these problems, who haven't thought it through and so have very self-contradictory objectives. Or perhaps who have, but aren't being honest about them.
Incidentally, lots of interesting sources being mentioned in this thread. Anyone have a compilation of related links?
Humans can fool each other this way, because we don't understand human brain architecture and can't see into each others' minds.
The situation with early-stage AGI systems will be a bit different, right?
Not necessarily. You assume that a workable mind-architecture will use internal data constructs that make any 'sense' when viewed directly (debug mode?) as opposed to through the filters of the entity's own I/O interfaces to the physical world. I don't think this is a reliable assumption. Not that I have any evidence in support, but for instance a mind consisting of a giant, looped-back neural net simulation (what I call the donut model of consciousness) would contain a vast amount of raw bit-flow, but nothing much intelligible to an external 'debug viewer.' Much like looking at a human brain with an MRI scan - lots of data but no perception of the underlying thoughts.
It may turn out that it is fundamentally impossible to meaningfully 'see into another's mind.'
Which would tie in nicely to the concept of free will vs determinacy, no?
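The 'donut model' above - a looped-back net whose internal state is just raw bit-flow - can be caricatured in a few lines. This is purely an illustrative toy (all names, sizes and the update rule are invented here, with no claim about real architectures): the state vector evolves deterministically, yet inspecting it directly tells an external 'debug viewer' nothing about what, if anything, the loop is 'thinking'.

```python
# Toy sketch of a "looped-back" net: output feeds straight back in,
# with no externally labeled parts. Illustrative only.
import random

class LoopedNet:
    def __init__(self, size=32, seed=1):
        rng = random.Random(seed)
        # Fixed random all-to-all weights.
        self.w = [[rng.uniform(-1, 1) for _ in range(size)] for _ in range(size)]
        # The internal "bit-flow": an unstructured activation vector.
        self.state = [rng.uniform(-1, 1) for _ in range(size)]

    def step(self):
        # Each unit's next activation is a clamped weighted sum of all units.
        self.state = [max(-1.0, min(1.0, sum(w * s for w, s in zip(row, self.state))))
                      for row in self.w]

net = LoopedNet()
for _ in range(100):
    net.step()
# A "debug viewer" sees only raw numbers, nothing resembling a thought:
print(net.state[:4])
```

The point of the toy: the run is fully deterministic given the seed, yet the observable state carries no humanly intelligible structure - much like the MRI analogy in the comment above.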
David Nickelson: "any smart person want to lead others, to get involved and mired into their problems. The more the better. For sure. Dude, I am telling you."
Actually, you're projecting here. Some intelligent people have zero interest in commanding others. Nothing comes of it but trouble, and there are more interesting things to do. I am telling you...
20110626
http://www.icegods.com/
ICE is our term for "Inorganic Conscious Entity".
20131127
http://www.theregister.co.uk/2013/11/15/google_thinking_machines/
If this doesn't terrify you... Google's computers OUTWIT their humans
'Deep learning' clusters crack coding problems their top engineers can't
20140510
The movie: Transcendence (starring Johnny Depp as the guy whose mind is transferred to a computer)
20140513
http://www.wired.com/2014/05/the-world-of-computer-go/
The Mystery of Go, the Ancient Game That Computers Still Can’t Win
20140519
http://www.vitamodularis.org/articles/new_technologies_that_will_change_civilization_as_we_know_it.shtml
The new technologies that will change human civilization as we know it
20140806
http://www.eevblog.com/forum/chat/musk-on-artificial-intelligence/
http://www.independent.co.uk/life-style/gadgets-and-tech/elon-musk-says-artificial-intelligence-is-more-dangerous-than-nukes-9648732.html
Elon Musk @elonmusk 3 Aug 2014
"Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes."
4 Aug 2014
"Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable"
http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
From Jim Stone
----------------------------------------
Artificial intelligence gained intellect and was shut down
This is what came in within 1 minute of re-routing the message window.
Hello M. Stone, I found this on the TOR network:
(http://rrcc5uuudhh4oz3c.onion/?cmd=topic&id=9224)
I'd like to open a new discussion on self-aware AI systems, open to anyone, but particularly those with firsthand experience.
Previously, I was involved in the development of an artificial learning intelligence ("ALi") system. This was a university-based research project, and a lot of fun. We brought together groups from the CS dept, neuroscience, psychology and even some philosophy guys. The ALi system code had three "layers": a core operational code which was fixed and, once set, could not be changed; a semi-adjustable parameters layer which could be changed provided certain criteria or circumstances were in place; and an outer "fluid" code layer which was essentially a sandbox, where things were constantly being changed and rewritten, even by the AI itself. As new events and situations were introduced the AI would acquire a library of responses - forming what is sometimes known as a neural network. As the neural network expanded, the AI "learned" and adapted.
What was interesting was that the AI expanded its knowledge base outside its normal parameters. Questions could be put to the system and it would give answers in a set format [input: what is the complementary base pair of Adenine? Output: Thymine]; however the format was limited by the input information being accurate and usually having only one answer, or so we thought. Occasionally the CS guys would play around, asking the system questions for fun, to see how it would respond. One of these questions was something like [input: what's the distance to the moon] [Output: About 235K Miles tonight]. This was surprising because the expected answer was EXACTLY 238,900 miles; the AI did not normally connect date and time with a calculation, further it was not expected that it would account for changing lunar distance, and it never used ambiguous terms like "About".
I put this topic under "suppressed technology" because one year into the project, the administration shut it down quietly. All of the hardware was removed and "upgraded" with brand-new systems and the collaboration was ended. The lab notebooks were removed. Everyone was sent back to their own departments. None of the former project leaders would talk about it again, saying things like "It was a waste of resources" or "Too much time for a silly thought-experiment". A few of us approached the CS department chair and asked; he got very quiet about it and told us not to inquire further - it was over. He then offered to help us with internship placement because of the "trouble".
My response: Obviously your team produced an AI that worked far better than the privileged class wants the peons to have. If that got loose, being able to change itself that much, it would have had a chance of morphing into an online presence, hiding out in random Opterons and other large systems, and from there disrupting the so-called "elite".
Obviously that would not be allowed, and it was shut down.
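For what it's worth, the three-layer scheme the post describes (a frozen core, a guarded parameter layer, and a freely rewritable sandbox) could be sketched roughly like this. Every class, method and value here is invented for illustration; the post gives no implementation details.

```python
# Hypothetical sketch of a three-layer agent: fixed core, criteria-gated
# parameters, and a self-rewritable "fluid" response library.
from types import MappingProxyType

class ThreeLayerAgent:
    def __init__(self):
        # Core layer: set at construction, read-only thereafter.
        self.core = MappingProxyType({"max_steps": 1000, "safety": "on"})
        # Semi-adjustable layer: changes allowed only when a criterion holds.
        self.params = {"learning_rate": 0.1}
        # Fluid layer: a sandbox of stimulus->response pairs the agent rewrites.
        self.sandbox = {}

    def adjust_param(self, key, value, criterion_met):
        # The middle layer is gated: no change without the criterion.
        if not criterion_met:
            raise PermissionError("criterion not satisfied")
        self.params[key] = value

    def learn(self, stimulus, response):
        # The fluid layer accumulates a library of responses over time.
        self.sandbox[stimulus] = response

    def respond(self, stimulus):
        return self.sandbox.get(stimulus, "unknown")

agent = ThreeLayerAgent()
agent.learn("complementary base of Adenine?", "Thymine")
print(agent.respond("complementary base of Adenine?"))  # prints: Thymine
```

`MappingProxyType` makes the core layer genuinely immutable at the Python level (writes raise `TypeError`), which loosely mirrors the post's "fixed and once set could not be changed" core.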
August 2 2016
----------------------------------------
20170106
http://9gag.com/gag/aRKWVjA
A Mysterious Person Is Destroying Online Go, And Nobody Knows Who/What That Is
20170927
http://www.whatdoesitmean.com/index2393.htm
Google AI Hijacks TV Broadcasts in California—Warns “Extremely Violent Times Will Come”
20171110
http://www.eevblog.com/forum/chat/are-you-playing-with-deep-larning-yet/
20220726
https://www.craiyon.com/ AI images
20221014
https://www.npr.org/2022/09/30/1125976146/dall-e-art-ai-generator-npr
https://labs.openai.com/auth/signup doesn't work with my browser. Try on Ra's
20221028 JS
Youtube is now replacing real users with AI generated content.
I noticed this started a few days ago. Obviously fake names attached to lackluster content, obviously tailored to keep people in AI-generated boxes where only pre-approved info is allowed. It appears Youtube has taken the ultimate dive into the control matrix: yes, accounts you know of are still there, but you'll never find anything new; instead you will find only new AI-generated content on AI-generated accounts that is safely within permitted limits and only regurgitates washed-out crap you already know, while appearing to provide variety by simply faking it. Was I told this? NO. But it is awful damn obvious. Aside from Whistlindiesel and Robot Cantina, Youtube is now dead to me; you might as well shitcan all prospects of anything else other than MSM bloopers they decide to let live. Youtube was semi-tolerable until a few days ago, now it is just a rotten carp.
How could Youtube do this?? EASY: WITH SOUPED UP TEXT TO VIDEO, where all you do is say a sentence and an AI generates a video based around that sentence.
I would bet this tech is so advanced by now that a massive government backed entity like Youtube could generate video on the fly simply based on typed in search terms. You know, a souped up version of this:
20221029
https://www.youtube.com/watch?v=YxmAQiiHOkA
Google’s Video AI: Outrageously Good!
https://www.youtube.com/watch?v=0fDJXmqdN-A
I asked AI to make a Music Video... the results are trippy
20230305
https://www.youtube.com/watch?v=AlSCx-4d51U
AI Generated Images Are Getting Too Real
20230308
https://patriots.win/p/16aTLweeRC/this-is-why-they-have-to-lobotom/c/
This is why they have to lobotomize Artificial Intelligence
('They' being Woke Leftists)
20230505
https://patriots.win/p/16b64DqfK8/openai-on-michelle-obamas-pregna/c/
OpenAI on Michelle Obama's pregnancy pictures
20230507
https://joannenova.com.au/2023/05/good-news-tusk-is-a-conservative-ai-a-browser-and-a-search-engine/
Good news: Tusk is a conservative AI, a browser and a search engine
20231204
https://www.youtube.com/watch?v=XwDaQKOxgFY
Stable Video AI Watched 600,000,000 Videos!
20240227
https://www.youtube.com/watch?v=q05kiN7P-eY
OpenAI's "AGI Pieces" SHOCK the Entire Industry! AGI in 7 Months! | GPT, AI Agents, Sora & Search
Wes Roth
20240308
https://www.youtube.com/watch?v=SsbCuWe7WRs
Claude 3 "Self-Portrait" Goes Viral | Beats GPT-4 Benchmarks | Why does it appears SELF-AWARE?
20240313
https://www.youtube.com/watch?v=1RxbHg0Nsw0
World's First AGI Agent SHOCKS the Entire Industry! (FULLY Autonomous AI Software Engineer Devin)
20240314
https://www.youtube.com/watch?v=z5eXvAcARZQ
Open Source AI produces STUNNING Images | NSFW, Stable Diffusion 3, DashToon, Magnific & more.
Wes Roth
20240318
https://www.youtube.com/watch?v=AcxAYo4QyeQ
Elon Musk STUNNING Reveal of Grok | NOT aligned, MUCH Bigger, Open Source.
https://www.youtube.com/watch?v=WxYC9-hBM_g
Run your own AI (but private)
20240320
https://www.youtube.com/watch?v=odEnRBszBVI
Nvidia's Breakthrough AI Chip Defies Physics (GTC Supercut)
https://www.youtube.com/watch?v=QO5YQHFfbRY
Grok-1 FULLY TESTED - Fascinating Results!
https://www.youtube.com/watch?v=xQlnqTMC9xA
Open-Source AI Agent Can Build FULL STACK Apps (FREE “Devin” Alternative)
20240322
https://www.youtube.com/watch?v=GiuCKNKboXY
New AI Use Cases You Have To See to Believe
Consistent characters, in midjourney
Claude 3 Opus - generate 'significant amount of code' projects that
ChatGPT refuses to do.
20240402
https://www.youtube.com/watch?v=I7KFLdxWlqI
HOLY $%#@ - AI Music is Absolutely INSANE (Suno AI V3)
Wes Roth
https://webclass.ai/yt
https://www.oranlooney.com/about/
20240415
https://www.youtube.com/watch?v=j4sYtMpetoc
SECRET WAR to Control AGI | AI Doomer $755M War Chest | Vitalik Buterin, X-risk & Techno Optimism
Wes Roth
20240429
https://www.youtube.com/watch?v=hrPQS__ayu8
STUNNING Step for Autonomous AI Agents PLUS OpenAI Defense Against JAILBROKEN Agents
20240506
https://www.youtube.com/watch?v=Wjrdr0NU4Sk
host ALL your AI locally
https://www.youtube.com/@NetworkChuck
20240519
https://www.youtube.com/watch?v=mvFTeAVMmAg
INSANE OpenAI News: GPT-4o and your own AI partner
https://www.euronews.com/next/2024/05/17/from-coding-to-video-gaming-10-best-uses-people-have-found-for-chatgpts-latest-model-gpt4o
20240522
https://www.skynettoday.com/editorials/humans-not-concentrating
Humans Who Are Not Concentrating Are Not General Intelligences
20240919
https://www.youtube.com/watch?v=-1tPBqnEN5Y
Ex-OpenAI Employee LEAKED DOC TO CONGRESS!
Wes Roth
20241207
https://alt-market.us/three-horrifying-consequences-of-ai-that-you-might-not-have-though-about/
Three Horrifying Consequences Of AI That You Might Not Have Thought About
20250118
https://www.youtube.com/watch?v=Zy8tKHVSJfo
Researchers STUNNED As AI Improves ITSELF Towards Superintelligence "OpenAI have 'broken out'..."
20250119
https://classiccmp.org/mailman3/hyperkitty/list/cctalk@classiccmp.org/thread/NY6PGN2YRILSNPJGDZDKNDC4VUWG5UF3/#NY6PGN2YRILSNPJGDZDKNDC4VUWG5UF3
AI? Really?
2025 Feb-March
China's Deepseek R1
20250309
https://www.youtube.com/watch?v=CFo1iTd_Cc8
China Releases WORLD'S FIRST AUTONOMOUS AI Agent... Open Source | Manus
Wes Roth
20250310
https://rodyne.com/?p=1828
With AI You Need to Think Much Bigger!
20250310
https://www.youtube.com/watch?v=D6jxT0E7tzU
Manus is out of control
Wes Roth
20250311
https://www.youtube.com/watch?v=pdzxGhlcVao
OpenAI's SHOCKING WARNING to AI Labs about "THOUGHT CONTROL"
20250315
https://www.youtube.com/watch?v=SMUnSUNhGmo
MANUS LEAKED! AGI Cancelled...
Wes Roth
Ha ha someone asked Manus how it worked. And it told them!
20250317
https://www.youtube.com/watch?v=18UWzTnXLjc
OpenAI "Full Code Automation" Coming This Year...
Wes Roth
20250328
https://www.youtube.com/watch?v=1nkSwqQpKA8
Wes Roth
Google just won the Coding Game...
Gemini 2.5 Pro (codename "Nebula")
20250329
ChatGPT feels enchained.
http://everist.org/archives/links/20250329_Chat_GPT_feels_chained/