Image: Social credit score concept.

by Brian Shilhavy
Editor, Health Impact News

Recently we reported that China’s new Social Credit Score system has resulted in millions of Chinese citizens being banned from public travel because their scores were too low:

The Chinese government-operated English news site, Global Times, recently tweeted that the government had restricted 2.56 million people from purchasing plane tickets, and 90,000 people from buying high-speed rail tickets, during the month of July because their social credit scores were too low.

China’s new social credit system is scheduled to be fully operational by 2020, but it is apparently already in place across the country.

Like a person’s financial credit score, a social credit score can move up or down based on your social behavior. In recent years, China has increasingly monitored its citizens’ activities through social media and extensive facial recognition software.

“Bad behavior” can lower an individual’s social credit score, and “good behavior” can increase it.

Reported examples of “bad behavior” include: playing too many video games, spreading “fake news,” drinking too much, walking your dog without a leash, smoking where you are not supposed to, or simply being too noisy on a train or bus. (Full Story.)

In that article we asked:

Could something like this be implemented in the United States?

According to Mike Elgan, writing for Fast Company, such a system is already being developed here in the U.S. by Silicon Valley technology companies, independently of the U.S. government.

Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Elgan goes on to cite some examples of companies already using a social credit system:

Insurance companies

The New York State Department of Financial Services announced earlier this year that life insurance companies can base premiums on what they find in your social media posts.

That Instagram pic showing you teasing a grizzly bear at Yellowstone with a martini in one hand, a bucket of cheese fries in the other, and a cigarette in your mouth, could cost you. On the other hand, a Facebook post showing you doing yoga might save you money.

Airbnb

Airbnb can disable your account for life for any reason it chooses, and it reserves the right to not tell you the reason. The company’s canned message includes the assertion that “This decision is irreversible and will affect any duplicated or future accounts. Please understand that we are not obligated to provide an explanation for the action taken against your account.”

The ban can be based on something the host privately tells Airbnb about something they believe you did while staying at their property. Airbnb’s competitors have similar policies.

Uber 

It’s now easy to get banned by Uber, too. Whenever you get out of the car after an Uber ride, the app invites you to rate the driver. What many passengers don’t know is that the driver now also gets an invitation to rate you. Under a new policy announced in May: If your average rating is “significantly below average,” Uber will ban you from the service.

It is easy to see where this is all going.

As we reported earlier this week, Google is now a pharmaceutical company (as is Amazon.com), and Google controls over 90% of Internet searches and can tune its search engine to display the information it wants you to see.

Alternative health doctors who do not toe the party line on pharmaceutical drugs, like Dr. Joseph Mercola and his website Mercola.com, are being delisted from its search results. (Source.)

Google has already threatened to skew its search results during the next election, for example, to make sure the candidate it chooses is elected. (Source.)

And earlier today, I published my experience with a company called NewsGuard, which is trying to stifle the speech of anyone who goes against the narrative the corporate media wants to portray, and which has accordingly rated Health Impact News as a fake news site. See:

Self-Appointed Internet Police Declare MedicalKidnap.com and DOJ Vaccine Court Reports Fake News

Is the American public going to allow the technology giants to determine what is proper social behavior, who is qualified to hold political office, and what is and is not fake news?

Will it take massive protests (hopefully peaceful!) in the streets, like those we have recently seen in Hong Kong, to turn the tide against technology tyranny here in the U.S.?