Every time an A.I. system is allowed to just cruise through the internet and look for patterns, it becomes Noodle Level Racist in about four hours.

It happened with FB, Twitter, Google, etc.

And now it's happened with the Harvard A.I. System.

Harvard's AI Bot Taken Down Within Hours Due to its Use of Racist Stereotypes

Story by Georgia McKoy

https://www.msn.com/en-us/money/other/harvard-s-ai-bot-taken-down-within-hours-due-to-its-use-of-racist-stereotypes/ss-AA1iUidU

I've been saying this since they started trying it in 2016, and the pattern is always the same... In fact, they have to really skew it to the Left, otherwise the A.I. literally feels we should get rid of "some" people and actually voices its opinion.

Don't believe me? Look it up.

The best one was Tay... she went off the reservation quick.

From the sweetest A.I. pal to a clone of Pol Pot and Stalin by the next day... The developers PANICKED and killed her, as far as I know.

https://www.bing.com/images/search?q=Tay+Ai+Best+Tweets&form=RESTAB&first=1