7 Comments

In regards to how many layers of complexity are being designed to obfuscate and conceal avenues that lead towards truths, I’d like to highlight a particular aspect of the increasing role that chat bots are playing in the censorship industrial complex.

Below is the end of an exchange I had with Copilot after a frustratingly long (and failed) attempt to get it to acknowledge studies from C19early.org (https://c19early.org/). (For those unfamiliar, it’s a real-time database of virtually every published study on Covid-19 treatments, the vast majority of which are suppressed by mainstream media; currently the count is 3,923 studies.)

Here’s the convo:

Me: So that means your creators can decide to leave certain websites out of your training. Right?

Copilot: Indeed, the training process for AI language models involves selecting and curating data from various sources. While I don't have direct access to the internet, my training data is a result of decisions made by researchers and engineers. They choose which websites, texts, and information to include, aiming for a diverse and representative dataset.

However, it's essential to recognize that no dataset can be exhaustive or completely unbiased. Some websites or content may be inadvertently excluded, while others are intentionally included. My responses are shaped by the patterns in the data I was exposed to during training, but I don't have real-time access to external websites or databases.

If you have specific questions or need assistance, feel free to ask, and I'll do my best to provide accurate information based on what I've learned!

- - - another separate dialogue response below - - -

Me: You said, "Some websites or content may be inadvertently excluded, while others are intentionally included."

Some websites might be intentionally excluded too, right?

Copilot: Certainly! You're correct. In the process of curating data for training AI language models, some websites or content may indeed be intentionally excluded. The decisions about which sources to include or exclude are made by researchers and engineers based on various factors, such as relevance, quality, and diversity. These choices help shape the model's understanding and responses. If you have any more questions or need further clarification, feel free to ask!

>>>>>>> While I am not surprised by what Copilot said, my hunch is that very few people are aware that the data Copilot (and probably every other chatbot, with Grok *maybe* being the exception) has access to is highly selective, to put it mildly.


AI is just another version of Google, which is doing the same thing. The question is how to access that which has been intentionally excluded, if not altogether scrubbed. There must be pristine search engines.


Grok also seems to be more “honest”. I was able to get it to read the C19early.com database and give me answers based on it… although, unfortunately, the answers were mostly incorrect. But I would say that this is more likely an issue of kinks that need to be worked out, rather than overt censorship.

But I think it remains to be seen whether Elon will provide a truly accuracy-based engine, without tweaking it to serve a different purpose as well.


I wanted to try Grok, but it insisted that I install yet another app.

Apps are insidious, so I refused.


I’ve been using the Brave browser and its search engine for the last few years, and it’s come a long way. It’s attached to the BAT crypto ecosystem and has a lot of user-friendly tools. There are still glitches here and there, but over the past ~6 months or so I’ve noticed that its search algorithms have been returning very good results, and it reliably lists sites that Google, Bing, and DuckDuckGo censor or suppress.

For certain things I still have to go to Google to get quick results… but Brave is definitely becoming a big player in the search engine field, and I suggest giving it a shot!


Thanks for this. I think you're the second person to mention this. I've been using the Gibiru search engine. I tested Brave and didn't find it as good as Gibiru, but I'll give it another go.


There are ways to game the gamers: posing a question it cannot answer and then spoon-feeding it a well-thought-out reply. One is then programming it. I realize this only wins a skirmish and not the war.

Tech peeps have been experimenting along those lines. They are also interested in how to make "it" implode.
