- Health Impact News - https://healthimpactnews.com -

The “Godfather of AI” Says Doomsayers Are Wrong and ChatGPT Isn’t Remarkable – Mainly an Advanced Disinformation Tool

by Brian Shilhavy
Editor, Health Impact News

The hype over AI Chat searches, as well as the “AI is going to take over the world and replace humans” narrative, continues unabated, and investments in AI are now the only thing left propping up the U.S. economy.

I suppose this is the result of a generation of adults who grew up in the “computer age” that began in the early 1980s, and who are now running the economy with their beliefs in AI and technology.

Old school technologists like myself, who watched all of this technology develop and know better, are having little to no success in dispelling these false beliefs. I have earned my living and built my career on this technology for over 25 years now, but that doesn’t seem to matter to this generation.

If you want to avoid the catastrophe that is coming as a result of over-investing in this new tech fad, start by reading this historical article on the failures and losses of investing in AI for the past 75 years:

The 75-Year History of Failures with “Artificial Intelligence” and $BILLIONS Lost Investing in Science Fiction for the Real World [1]

My views on the hype over this “new” AI fad are not unique at all, as many others share them. But it is much more interesting to claim that AI is going to take over the world and replace humans, and that is the view that gets clicks and traffic today, which can obviously be monetized as well.

Therefore, the “AI is going to take over the world” view is the predominant view, not because it is true, but because it is more popular and sells more.

So I am going to highlight some of the other dissenting voices in this article, and then I am going to show what this “new” AI Chat software is actually doing today, as it has been out in the public for about 5 months now.

But if you want the spoiler as to what it is actually doing today with hundreds of millions of users, here it is: It is a disinformation and data collection tool.

NYU Professor and Meta Platforms Chief AI Scientist Yann LeCun: “ChatGPT Isn’t Remarkable”

Yann LeCun, an NYU professor, also serves as Meta’s chief AI scientist. Image source [2].

This first dissenting voice over the current AI hype is from Yann LeCun, an NYU professor, who also serves as Meta’s chief AI scientist.

The ‘Godfather of AI’ Says Doomsayers Are Wrong and ChatGPT Isn’t Remarkable

Settle Down. Hi everyone. The excitement over advances in generative artificial intelligence has reached a fever pitch, bringing with it an extreme set of worries.

The fearmongers fit into two camps: either AI will soon enable a vast dystopian future or it will unleash an existential threat to humanity. Last month, a group of technology executives, including Elon Musk and some AI luminaries, added fuel to the fire when they called for a six-month pause on developing advanced AI systems so that the industry could build safeguards against harmful outcomes.

The call from tech executives to pause innovation is both unprecedented and unnecessary.

Barron’s Tech recently talked to Meta Platforms chief AI scientist Yann LeCun about the current state of AI, the rise of ChatGPT, and his views on why asking for a moratorium on AI research is misguided.

LeCun is one of the AI industry’s most prominent scientists and has been an outspoken critic of those who have exaggerated the capabilities of the underlying technology used by AI chatbots such as ChatGPT.

He’s a professor at New York University and joined Facebook—now Meta—in 2013. Along with Geoffrey Hinton and Yoshua Bengio, LeCun received the 2018 ACM Turing Award—known as the Nobel Prize of computing—for his research around deep learning techniques that have become foundational for modern AI technologies.

The three scientists have frequently been called the “Godfathers of AI” for their work in the space.

Here are the edited highlights from our conversation with LeCun.

Barron’s: Can you explain how ChatGPT and the technology behind large language models (LLMs) work?

LeCun: You can think of it as a super powerful predictive keyboard. Large language models are first trained on an enormous amount of words. We show the model a window of words and ask it what the next word is. It is going to predict the next word, inject the word and then ask itself what the next word is.
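
To illustrate LeCun’s “predictive keyboard” description, here is a minimal, purely illustrative sketch in Python. The toy corpus and bigram counts below are stand-ins for a real model’s billions of learned parameters; actual LLMs predict over sub-word tokens with a neural network, not a lookup table, so treat this only as a picture of the predict-append-repeat loop he describes.

```python
# A toy version of the "predict the next word, inject it, predict again"
# loop described above. Everything here (the corpus, the bigram counts)
# is an illustrative stand-in, not how ChatGPT is actually implemented.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow which word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generation: predict a word, append it, and ask again.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```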

What are the models good for and not so good for?

They are good as writing aids. They can help you formulate things in a grammatically correct style. But answering factual questions? They are not so good. The model is either regurgitating what’s stored in its memory or regurgitating some approximate thing that is a mix or interpolation of various things that it has read in the training data. That means it can be factually wrong or it is just making stuff up that sounds good.

Why do AI chatbots have such large problems with accuracy at times?

When you have a system like this that basically predicts one word after another, it is difficult to control or steer, because what it produces depends entirely on the statistics it was trained on and the given prompt.

Mathematically, there is a good chance that it will diverge exponentially from the path of correct answers. The longer the answer that is produced, the more likely you are to end up producing complete garbage.
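
One way to read that divergence argument numerically (the per-token error rate below is a made-up assumption used only for illustration, and real errors are not independent): if each generated word has some small chance e of steering the answer off course, the chance that a long answer stays entirely on course shrinks roughly like (1 − e) raised to the length of the answer.

```python
# Back-of-the-envelope illustration of the divergence argument.
# `e` is a hypothetical per-token error rate, not a measured value,
# and errors are assumed independent purely for simplicity.
e = 0.02

for n in (10, 50, 100, 500):
    p_on_track = (1 - e) ** n
    print(f"{n:>4} tokens: ~{p_on_track:.1%} chance the answer stays on course")

# With e = 0.02 this prints roughly 82% at 10 tokens, 36% at 50,
# 13% at 100, and essentially 0% at 500 - longer answers drift more.
```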

Are we near AGI, or artificial general intelligence, when machines are able to learn and think for themselves?

There are claims that by scaling out those [LLM] systems we will reach human level intelligence. My opinion on this is that it is completely false. There are a lot of things we do not understand that we do not know how to reproduce with machines yet—what some people call AGI.

We’re not going to be able to use a technology like ChatGPT or GPT-4 to train a robot to clear a table or fill up the dishwasher, even though this is a trivial task for a child. We still can’t do it. We still don’t have level five [fully] autonomous driving. That requires a completely different skill set that you can’t learn by reading text. (Full article [2].)

Next is Emily M. Bender, who, like myself, is trained in linguistics. She is a professor of linguistics at the University of Washington, and the Faculty Director of the Computational Linguistics Master’s Program.

Policy makers: Please don’t fall for the distractions of #AIhype

Below is a lightly edited version of the tweet/toot thread I put together on the evening of Tuesday, March 28, in reaction to the open letter put out by the Future of Life Institute that same day.

Okay, so that AI letter signed by lots of AI researchers calling for a “Pause [on] Giant AI Experiments”? It’s just dripping with #AIhype [3]. Here’s a quick rundown.

The letter can be found here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ [4]

First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo [5]

So that already tells you something about where this is coming from. This is gonna be a hot mess.

There are a few things in the letter that I do agree with, and I’ll try to pull them out of the dreck as I go along. With that, into the #AIhype [6]. It starts with “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]”.

Screencap of first para of open letter, beginning: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
Screencap from the open letter. This is the first paragraph, accessible at the link above.

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots [7]. But we are not talking about hypothetical “AI systems with human-competitive intelligence” in that paper. We’re talking about large language models.

And as for the rest of that paragraph: Yes, AI labs are locked in an out-of-control race, but no one has developed a “digital mind” and they aren’t in the process of doing that.

Could the creators “reliably control” #ChatGPT [8] et al? Yes, they could — by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.

Could folks “understand” these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we’d be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.

Next paragraph:

Screencap of 2nd para, beginning: “Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to …”
Screen cap of second paragraph of open letter

Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT4. ROFLMAO.

On the “sparks” paper, see:
https://twitter.com/emilymbender/status/1638891855718002691?s=20 [9]

On the GPT-4 ad copy, see:
https://twitter.com/emilymbender/status/1635697381244272640?s=20 [10]

And on “generality” in so-called “AI” tasks, see: Raji et al. 2021. AI and the Everything in the Whole Wide World Benchmark [11] from NeurIPS 2021 Track on Datasets and Benchmarks.

I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it. (Full article [12].)

And here is another article by Emily M. Bender.

Google CEO peddles #AIhype on CBS 60 minutes

Another tweet thread turned into a blog post, to keep it all in one place, reacting to this tweet/clip from CBS 60 Minutes (as flagged by Melanie Mitchell):

This is so painful to watch. @60Minutes [16] and @sundarpichai [17] working in concert to heap on the #AIHype [18]. Partial transcript (that I just typed up) and reactions from me follow:

Reporter: “Of the AI issues we talked about, the most mysterious is called ‘emergent properties’. Some AI systems are teaching themselves skills that they weren’t expected to have.”

“Emergent properties” seems to be the respectable way of saying “AGI”. It’s still bullshit.

As @mmitchell_ai [19] points out (read her whole thread; it’s great) if you create ignorance about the training data, of course system performance will be surprising.

Reporter: “How this happens is not well understood. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know.”

Is there Bangla in the training data? Of course there is:

Unidentified interviewee: “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali.”

What does “all of Bengali” actually mean? How was this tested?
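
Bender’s question is, in principle, checkable whenever a training corpus is actually disclosed, which is exactly the transparency she calls for above. As a minimal sketch (the sample string below is made up, and a real corpus would be scanned at far larger scale), Bengali is written in its own Unicode block, so detecting its presence in a text sample is straightforward:

```python
# Minimal sketch: does a text sample contain Bengali-script characters?
# Bengali occupies the Unicode block U+0980-U+09FF. The `sample` string
# is a made-up stand-in for a document drawn from a disclosed corpus.
def contains_bengali(text: str) -> bool:
    return any("\u0980" <= ch <= "\u09ff" for ch in text)

sample = "This is English. আমি বাংলায় লিখছি।"
print(contains_bengali(sample))  # True
```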

Later in the clip @sundarpichai says: “There is an aspect of this which we call, all of us in the field, call it as a black box. You know, you don’t fully understand, and you can’t quite tell why it said this or why it got wrong. […]”

Reporter: “You don’t fully understand how it works, and yet you’ve turned it loose on society?” Pichai: “Let me put it this way: I don’t think we fully understand how a human mind works, either.”

Did you catch that rhetorical sleight of hand?

Why would our (I assume, scientific) understanding of human psychology or neurobiology be relevant here?

The reporter asked why a company would be releasing systems it doesn’t understand. Are humans something that companies “turn loose on” society? (Of course not.)

The rhetorical move @sundarpichai is making here invites the listener to imagine Bard as something like a person, whose behavior we have to live with or maybe patiently train to be better. IT. IS. NOT.

More generally, any time an AI booster makes this move (“we don’t understand humans either”) they’re either trying to evade accountability or trying to sell their system as some mysterious, magical, autonomous being.

Reporters should recognize this and PUSH BACK. (Full article [22].)

See Also:

Tech Won’t Save Us: ChatGPT Is Not Intelligent w/ Emily M. Bender [23]

Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology.

Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master’s Program. She’s also the director of the Computational Linguistics Laboratory.

What AI Chat Bots are Actually Doing Today

One of the easiest ways to see what these AI Chat bots are doing is to use one to search for yourself, or maybe for someone else you know, and then see whether what it says is accurate or not.

As I have previously reported [24], I am not able to test the most popular of these AI Chat programs, ChatGPT (developed by OpenAI and backed by Microsoft), because when I tried to set up an account, it wanted a REAL cell phone number, and not a “virtual” one.

So I declined to proceed further.

If you try to install ChatGPT as an extension to your web browser, you get this warning:

Again, I declined, since I do NOT trust “the developer.”

However, there is one new search site that uses AI Chat software and claims to be “A search engine free of tracking and invasive ads.” This is You.com [25].

However, much like privacy-oriented DuckDuckGo.com, You.com uses search results from Microsoft’s Bing search engine. (Source [26].)

That means that, much like Google, there are algorithms in place deciding which search results to display and making sure you do not find information that they label “misinformation.”

So I took advantage of being able to use this AI search tool without having to register or identify myself, and I started by simply typing in my name, which produced this:

Brian Shilhavy appears to be the Senior Editor at Health Impact News and founder/owner of Healthy Traditions. He has authored a book called “Virgin Coconut Oil: How It Has Changed People’s Lives, and How It Can Change Yours!” and has also published articles related to vaccines and vaccines safety. The website HealthImpactNews.com, founded by Brian Shilhavy, has run numerous stories related to COVID-19 vaccine safety and efficacy, some of which have been criticized for containing misinformation. It’s worth noting that some of his claims have been disputed by health experts and organizations.

There was a field to ask a follow-up question, so I asked: “does brian shilhavy provide evidence that vaccines are not safe?”, since much of what I have published is taken directly out of the U.S. Government’s own VAERS database.

This is the answer it gave:

While Brian Shilhavy has published articles and written books on vaccines claiming that they may cause harm, there is significant disagreement within the scientific community about the veracity of these claims. There is a wealth of scientific evidence supporting the safety and efficacy of vaccines, and government health agencies worldwide have recommended vaccination as an important public health measure. It’s worth noting that some of Shilhavy’s claims have been disputed by health experts and organizations. Therefore, it’s important to consult a variety of trustworthy sources when it comes to making healthcare decisions.

Notice that it did not answer my question, but just repeated the propaganda that the CDC, FDA, and others publish.

In the sidebar, it listed this link:

Article claiming flu vaccine causes serious illness and death is misleading and inaccurate [27]

This is an older 2018 hit piece against an article we published that was going viral on Facebook at the time. The hit piece successfully killed our traffic, as most people would just read the headline and never click through to our article, and it never actually said the information we provided was “false,” because it wasn’t.

Here is why they said it was “misleading and inaccurate.”

Misleading: The claim of harms caused by the flu vaccine relies on an official government report (petitions for compensation due to alleged vaccine injury). But the article never mentions that it is not necessary for petitioners to present proof.
Unsupported: The article does not provide peer-reviewed evidence of the flu vaccine’s harms.
Inaccurate: The article downplays the flu vaccine’s benefits by providing incorrect statistics on reduction in flu-associated mortality.

They also didn’t provide a link to the article we published, so that someone could verify for themselves whether what we were actually publishing was “misleading and inaccurate” or not.

And most people today can’t think for themselves anyway, and would not be able to clearly see the propaganda here, such as the statement that “it is not necessary for petitioners to present proof,” which is meaningless, because petitioners most certainly DO present proof when they file a claim most of the time, whether it is “necessary” or not.

They also used the logical fallacy of argument from silence [28], by saying we did not provide “peer-reviewed evidence of the flu vaccine’s harms.”

But we never claimed we provided “peer-reviewed” evidence, and the lack of that evidence proves nothing.

We provided a different kind of evidence that is contained in the Government VAERS database, as well as settled claims in the National Vaccine Court for deaths and injuries due to the flu vaccine – something most people are not even aware exists.

This is what AI Chat does. It just takes the propaganda from Google or Bing and creates text and narratives around that “information” to combat “misinformation” according to them.
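
The pattern being described here is what the industry calls retrieval-augmented generation: the chat bot runs a conventional search (Bing, in You.com’s case), pastes the top-ranked results into its prompt, and asks the language model to write a fluent narrative around them, so whatever the search engine ranks or filters flows straight into the answer. Here is a minimal sketch of that pipeline; the function names and the prompt wording are hypothetical stand-ins, not any vendor’s actual code:

```python
# Minimal sketch of a retrieval-augmented AI chat search. The two helper
# functions below are hypothetical stand-ins for a real search API and a
# real language-model API; only the shape of the pipeline is the point.
def search_bing(query: str) -> list[str]:
    # Stand-in: pretend these are the top-ranked result snippets.
    return [
        "Snippet from the top-ranked page about: " + query,
        "Snippet from the second-ranked page about: " + query,
    ]

def complete(prompt: str) -> str:
    # Stand-in: pretend this is a large language model's continuation.
    return "A fluent-sounding answer written around the supplied snippets."

def ai_chat_search(question: str) -> str:
    # 1. The answer is seeded by whatever the search engine ranks highest,
    #    so any ranking or filtering bias carries straight into the reply.
    snippets = search_bing(question)
    # 2. The model is asked to write a narrative around those snippets.
    prompt = (
        "Answer the question using the sources below.\n\n"
        + "\n".join("- " + s for s in snippets)
        + "\n\nQuestion: " + question + "\nAnswer:"
    )
    # 3. Whatever comes back reads confidently, accurate or not.
    return complete(prompt)

print(ai_chat_search("is the flu vaccine safe?"))
```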

And as I and many others have already reported, and as Microsoft and Google publicly admit, if the AI Chat bot does not find the information needed to craft a response, it just makes stuff up.

And yet, this has been unleashed on the public in a massive way, with disastrous results, as it collects the data you supply so these companies can continue to “train” their AI.

Here is a report just published this week by an investment advisor who shows that AI Chat software is degrading, NOT enhancing, online information.

Brace for an Avalanche of Horrible Online Investment Content, Thanks to ChatGPT

by J. Kim of skwealthacademy substack [29]

Excerpts:

Recently, I’ve been reading a lot about how ChatGPT has been allowing people who still have the ability to work from home to work three or four jobs simultaneously, especially if their jobs involve tasks that ChatGPT is specifically adept at accomplishing, like writing code, fixing problems in code, and writing marketing and ad copy.

There are literally people who work four jobs now, and not because they need to work 16 hours a day to make ends meet, but because they work jobs for which ChatGPT completes 80% or more of the required tasks for each job.

So, it makes sense that, if ChatGPT enables them to spend only 20% of the time normally required to complete their job tasks, they can work four jobs simultaneously and still have 20% more free time than before.

The Bad News for Investors

Unfortunately, the very utility of AI that numerous employees have been exploiting to earn significant salaries while completing very little work will yield a massive avalanche and proliferation of awful financial and investment guidance online.

And this is bad news for the retail investor who constantly seeks online information to guide their investment strategies.

Already, as of Q2 2023, horrible, regurgitated, tired, and ineffective investment strategies dominate financial and investment articles online, and finding any valuable investment guidance online is like searching for a needle in the proverbial haystack.

But it’s about to become worse due to AI. Much, much worse.

There is little doubt that many writers who earn a living providing financial and investment analysis and guidance have already turned to ChatGPT to write articles for them, given that ChatGPT can, for example, condense the several hours it took me to write this article down to less than 20 minutes.

Five minutes for ChatGPT to write the article and perhaps another 15 minutes to tweak the language in the article to make it appear that a human instead of an AI program wrote the article.

In fact, were I to offer up a purely speculative guess, I would surmise that the number of ChatGPT written articles of a financial/investment nature on paid platforms like Substack and others is already challenging the number of non-ChatGPT written articles.

However, as I’ve already stated on my skwealthacademy substack newsletter platform [29], I promise that none of the articles you read on my substack newsletter will ever be partially or majority-written by ChatGPT, and that all will contain only original content based upon my research and analysis.

Even if my assumption above is correct, and the trend eventually progresses to the point where 95% of all financial articles written online by authors with paid subscription models are written by ChatGPT, why would this be a terrible thing for all investors?

To start, I’ve experimented with ChatGPT just to analyze the truth and accuracy of its financial writing, and on a scale of 1 to 10, I would rate it a 1, which then means that the vast majority of published financial articles online could degenerate to a 1.

For example, just read this article [30] in which I outlined numerous errors and mistakes about gold as an investment contained in an article that I asked ChatGPT to write.

The reason that so many mistakes were contained in this ChatGPT-written article is that ChatGPT draws the information for its articles from extensive archives of online material. This was not the only article I asked ChatGPT to write about a finance/investment topic; I also asked ChatGPT to write articles about other assets like stocks, bonds, oil, etc., and while it will not provide any “financial advice,” it does forward the same tired, clichéd responses about these assets, as I’ll discuss below.

In each case, it was evident to me that the articles ChatGPT wrote were relying on an algorithm similar to Google’s in determining what was correct and what was not.

In other words, the messages conveyed through its articles had the same biases that you would find in any American or global financial advisor: anti-gold, pro-USD/Euro, pro-sovereign stock and pro-the most owned stocks, pro-sovereign bonds, anti-oil if the war machine was anti-oil, anti-nuclear energy if the war machine was pro-wind and pro-solar energy (despite the massive inefficiency and high costs of these “green” energies) and so on.

Stating the ChatGPT bias another way: with finance/investment topics, it was basically a mouthpiece for the global commercial investment industry and for the Military-Industrial-Banking complex.

All of the articles I asked it to write about finance/investing lacked any foresight or ability to accurately predict massive shifts in the investment environment, so the content it provided placed an investor well behind the curve instead of ahead of it.

Read the full article. [31]

AI Chat is not going to take over the world, but it most certainly is going to try to silence dissenting voices and create a single narrative that is “acceptable,” according to them, all while collecting data on you as you continue to use it, if you have signed up for an account.

See Also:

Understand the Times We are Currently Living Through

Exposing the Christian Zionism Cult [32]

Jesus Would be Labeled as “Antisemitic” Today Because He Attacked the Jews and Warned His Followers About Their Evil Ways [33]

Insider Exposes Freemasonry as the World’s Oldest Secret Religion and the Luciferian Plans for The New World Order [34]

Identifying the Luciferian Globalists Implementing the New World Order – Who are the “Jews”? [35]

Who are the Children of Abraham? [36]

The Brain Myth: Your Intellect and Thoughts Originate in Your Heart, Not Your Brain [37]

Fact Check: “Christianity” and the Christian Religion is NOT Found in the Bible – The Person Jesus Christ Is [38]

Young Man Living on the Streets Finds Jesus of the Bible – Overcomes Drug and “Terminally Online” Addictions [39]

COVID “Vaccine” Injured Muslim Man Learns COVID was a Scam and Meets Jesus of the Bible as he Begins to Heal [40]

Was the U.S. Constitution Written to Protect “We the People” or “We the Globalists”? Were the Founding Fathers Godly Men or Servants of Satan? [41]

The Seal and Mark of God is Far More Important than the “Mark of the Beast” – Are You Prepared for What’s Coming? [42]

The United States and The Beast: A look at Revelation in Light of Current Events Since 2020 [43]

The Satanic Roots to Modern Medicine – The Mark of the Beast? [44]

Medicine: Idolatry in the Twenty First Century – 8-Year-Old Article More Relevant Today than the Day it was Written [45]

Having problems receiving our emails? See:

How to Beat Internet Censorship and Create Your Own Newsfeed [47]

We Are Now on Telegram [48]. Video channels at Bitchute [49], and Odysee [50].

If our website is seized and shut down, find us on Telegram [48], as well as Bitchute [49] and Odysee [50] for further instructions about where to find us.

If you use the TOR Onion browser [51], here are the links and corresponding URLs to use in the TOR browser [51] to find us on the Dark Web: Health Impact News [52], Vaccine Impact [53], Medical Kidnap [54], Created4Health [55], CoconutOil.com [56].