I feel one of the greatest issues in healthcare (which is a reflection of society at large) is that things are so rushed there isn’t time for doctors to connect with their patients. Because of this, a lot of the most critical parts of medicine get missed, and I’ve known so many patients who were harmed by the medical system because of the 15-minute visit model. As such, my original goal here was to be able to connect with everyone who reached out to me, but now, due to how many correspondences I receive, that’s no longer possible.
Note: one of the most common questions I get is a request for physician referrals (which I answered in last month’s open thread).
I eventually decided that the best option was to post monthly open threads where anyone could ask whatever they wanted, as that way I could efficiently get through the pressing questions I was not able to answer in my articles, and then pair those threads with a topic that didn’t quite merit its own article.
For this month's open thread, I wanted to talk about a topic that’s becoming more and more a part of our lives—artificial intelligence.
Note: this article builds upon a previous one (the Great Vanishing of Information).
The Scalability of Governance
In almost every government, there is an inevitable tendency among those who rise to power to try to control every aspect of society they can get their hands on—even when those individuals are clearly wrong (e.g., consider how many clearly counterproductive and harmful COVID policies were continually pushed despite widespread public protest against them). As such, over the centuries, a variety of approaches have been adopted to stop government overreach, such as constitutions, courts and guaranteed rights, checks and balances that prevent any part of the government from becoming too powerful, making officials accountable at the ballot box, or directly arming the citizenry so it can resist tyranny.
However, I believe that the most effective force against government overreach is simply the scalability of tyranny. For example, if two officers or soldiers were assigned to ensure a troublesome citizen always complied, that would most likely work, but it would be impossible to implement on a large scale, as it’s generally accepted that at most 5-10% of the population can be soldiers (before the economy collapses), whereas what I just described would require more than half of the population to be diverted simply to ensuring people complied. Similarly, while police can generally maintain law and order, once too many people stop obeying them (e.g., during riots), things can rapidly spiral out of control and anarchy will emerge (something also seen when a government partially collapses).
In turn, my observation throughout history is that the thing which frequently stops horrific policies from being implemented is not any ethical consideration by the ruling class, but rather simply how feasible those policies are to implement. In contrast, what made the totalitarian states of the 20th century so destructive and unprecedented was the recent emergence of technology no one was ready for, which made it possible to radically scale up mass social manipulation and genocide.
To illustrate, there has been a longstanding belief within the ruling class that they have a duty to prevent the population from becoming too large and overwhelming society’s resources, which has resulted in brutal forced sterilization and forced abortion campaigns (which the population understandably fought back against). Because of this, once injectable birth control became available, globalist organizations switched to this more feasible approach (e.g., using it on refugees) and then put decades of work into fervently developing a far more scalable form of population control—sterilizing vaccines (which were then pushed upon developing nations).
Because of the scalability constraint, the ruling class has largely shifted to a passive model of control where:
•Putting economic incentives in place that force people to comply (e.g., many of the harmful or unnecessary practices in medicine ultimately originate from how the compensation model is set up—something we saw go into overdrive during the pandemic when hospitals were paid to push disastrous COVID protocols).
•Delegating the micromanagement of the population to corporate employers (who can be controlled through economic incentives, and whose workforces can likewise be controlled by the economic necessity of tolerating an undesirable employer or submitting to a dangerous vaccination).
•Using the government’s limited enforcement resources to make public examples of those who don’t comply so the rest of the population is frightened into compliance (which was likewise done to the doctors who dissented against the COVID protocols).
•Gradually creating algorithmic systems that encourage compliance (e.g., social credit scores).
•Keeping people so busy and overwhelmed with their work and livelihood that they have no time to do anything else, such as protest a corrupt government (which I and many others believe is a key reason why so many things that could reduce the need for people to work are never implemented).
•Having the media continually distract and disorient the population so they are drawn away from doing anything which could challenge the system.
Unfortunately, AI effectively removes this scalability constraint, as rather than requiring the majority of the population to be soldiers, a handful of engineers can now manage a system that effectively monitors (and harasses) the population. This is a very worrisome possibility we’ve never had to deal with before, and it was highlighted in one of RFK Jr.’s most controversial remarks:
“Even in Hitler’s Germany, you could cross the Alps to Switzerland. You could hide in an attic like Anne Frank did…the mechanisms are being put in place that will make it so none of us can run and none of us can hide.”
Similarly, one of the primary checks against war has been the need for large numbers of soldiers to comply, as it’s typically not feasible to get a large portion of the population to fight a clearly unjust war, since most human beings (regardless of how much you drug or condition them) do not want to kill others unless they truly feel they have to.
In turn, one of the major issues with AI is that it’s making it possible to kill on the battlefield without needing compliant soldiers. Presently, we are seeing this with drone warfare (which I believe the Ukraine war is, to some extent, being used to develop, much as the Vietnam War broke out a few years after the military decided it needed to develop helicopter gunship warfare). As such, I am extremely worried about the future that will be created once actual AI warfare (e.g., with robots) becomes viable, and if I could have one wish, it would be for an international treaty outlawing it (which could potentially be justified by the need to avoid a Terminator scenario).
Note: similarly, if the government can automate its policing activities, that opens the door to immense tyranny.
Government Efficiency
Bureaucracies by nature are always inefficient and dysfunctional. On one hand, this is a good thing, as it often ensures there is some way to hide from or evade them (e.g., via a legal loophole), but on the other hand, it lends itself to immense waste, inefficiency and inertia.
Elon Musk’s Department of Government Efficiency (D.O.G.E.), for example, is making it possible to audit a wide range of government programs and identify the wasteful and unnecessary ones—something many have tried to do for years, but which was simply not feasible to implement, as there was far too much for a few assigned personnel to unravel.
While this new era exposes us to significant risks (e.g., D.O.G.E. is also sometimes cutting necessary programs, and the vast AI apparatus is taking away our ability to hide from the government), it is also making it possible to tackle many longstanding institutional problems.
For example, I have long believed one of the simplest ways to end bad medical practices would be to have AI systems analyze the electronic health record data from large medical systems, as within minutes they can complete analyses that would take researchers years to conduct (and those analyses can then be repeatedly tweaked to figure out what’s actually in the data). Unfortunately, over the years I have met many people who were genuinely interested in pursuing this, but they all ran into roadblocks because the medical industry did not want its harmful moneymakers being exposed.
In turn, one of the exciting ideas MAHA has brought forward is doing just that (e.g., using AI to compare all the health records of the vaccinated and unvaccinated), as this is a way to expose all the harmful and wasteful healthcare practices that have gone on for decades—particularly since the current administration is prioritizing eliminating wasteful spending.
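To make this concrete, here is a minimal sketch in Python of the kind of crude cohort comparison such a system would automate across millions of records (the dataset, column names, and numbers are purely illustrative and not drawn from any real medical system):

```python
import pandas as pd

# Toy stand-in for an electronic health record extract: one row per
# patient, with exposure status and whether a given diagnosis was
# later recorded. All values here are invented for illustration.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "vaccinated": [True, True, True, False, False, False],
    "diagnosed":  [True, False, True, False, False, True],
})

# Crude rate of the diagnosis within each cohort.
rates = records.groupby("vaccinated")["diagnosed"].mean()

# Crude relative risk; a real analysis would also need to adjust for
# confounders such as age, sex, and healthcare-seeking behavior.
relative_risk = rates.loc[True] / rates.loc[False]
print(f"Crude relative risk: {relative_risk:.2f}")
```

The arithmetic itself is trivial; the barrier has always been getting access to the records and having the manpower to clean, link, and repeatedly re-analyze them, which is precisely the part AI can now do at scale.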
Likewise, one of the major problems we’ve faced for decades is that the monopoly on information wielded by the mass media made it impossible for the public to become aware of the policies that harm them and to mobilize en masse against those policies. However, the scalability of information transfer made possible by the internet (and social media) has essentially broken that monopoly and given rise to an unprecedented political climate where new and controversial ideas can rapidly go viral (particularly when honest algorithms highlight the information people actually want to know about).
Note: having watched the global media landscape for decades, it’s hard for me to even begin to describe how profound and unprecedented a change Twitter (𝕏) has created, as stories that previously would never have seen the light of day rapidly become national headlines, and false narratives die in hours rather than persisting for months.
The Future of Workers
If you track the course of history, the upper class has typically tried to hoard most of the resources for itself and then shared with the population either the excess it did not need or the bare minimum required for the working class to continue producing wealth for the upper class.
The recent era we were in (made possible by America’s intact infrastructure being well positioned to capture the post-World War II boom) was the wealthiest in humanity’s history, but in recent decades we’ve gradually been transitioning back to an era of vast wealth inequality where the upper class hoards all of society’s wealth and everyone else just scrapes by.
Typically, one of the main hedges against this exploitation is that to some extent, the upper class needs everyone else to work for them to generate the wealth they consume, so workers can’t be pushed too far (or they will revolt against the system).
AI, in turn, changes this paradigm, as many jobs that previously required human workers (e.g., document analysis or picking berries in a field) can now be outsourced to AI systems (e.g., Tesla’s robots have a real shot at upending the economy within a few years).
Since many workers will no longer be needed, many people I’ve spoken to (including a few fairly influential ones) are immensely worried that the ruling class is beginning to seriously consider reducing the population, particularly since we are at a time when the world’s population is undergoing great stress (e.g., due to the rapid and overwhelming transformation of life being created by the digital age), and times of great stress within a society typically coincide with large wars breaking out (especially if there’s a pre-existing “need” to reduce the population or restore order).
As such, many of us believe that it’s critical:
•Individuals train themselves in fields that cannot easily be outsourced to AI (e.g., by becoming the master of a craft).
•The consciousness of our society shifts (e.g., increased critical thinking) so that we cannot be manipulated into following harmful agendas (which fortunately is being made possible by platforms like Twitter).
•Our societal viewpoints shift toward valuing the important things in life (e.g., being connected to others, respecting and cherishing life, being in nature rather than immersed in technology, or embodying a genuine spiritual faith), as this way of living is the antithesis of the sterile and dehumanizing future being pushed upon us.
Artificial Thinking
I’ve long believed that one of the primary issues in our society is that the educational system conditions us to believe we “need to be taught to learn,” as this transforms education from an enjoyable, active process into an unpleasant, passive one that greatly diminishes both our ability to learn and our ability to think creatively.
Note: most of what I know was self-taught, as I realized early on that formalized education was taking away my ability to think.
One of the major problems with this model is that not only is the information we are taught biased, but the way we are taught to think is as well (e.g., we are encouraged to skip understanding the context behind a topic so we can cram the essential material for tests and to prioritize copying algorithms rather than independently coming up with a way to solve problems).
Note: this topic and how to effectively study is discussed here.
This issue has infected science and resulted in a large amount of erroneous (e.g., non-replicable) data being published that simply exists to support existing dogmas or pharmaceutical products rather than bring us closer to understanding the universe (which, for example, is why discoveries that revolutionize science are becoming much rarer).
In parallel to this, there has been a massive push both to eliminate undesirable information from the internet and to create a very specific way of thinking online (e.g., blindly trusting “the science”), which is best embodied by astroturfed websites like Reddit. Because of this, as the years have gone by, I’ve noticed it’s become harder and harder to find the information I’m looking for (it has essentially disappeared from all the standard channels) and that I’m frequently forced to navigate extremely biased platforms (e.g., Wikipedia) to find what I’m looking for.
Since I used the “old internet,” I know what used to be out there (and hence how to find it) and have an intrinsic sense of which biases I need to filter for in each type of information source I look through. As these are skills I believe are nearly impossible to learn for anyone who was not on the “old internet,” I am quite worried that much of that information will never be recognized by the generation raised on smartphones.
Note: this is somewhat analogous to how human beings used to be much healthier, but over the last 150 years, there’s been a gradually increasing epidemic of chronic and unusual diseases, alongside many natural medical therapies becoming much less effective, which is a result of the unnecessarily toxic environment modern technology is creating.
Artificial Scholarship
Generative AI systems have greatly compounded this problem, as:
•It’s often quite difficult to recognize if the information they provide is accurate—particularly if you do not have a deep familiarity with the systemic biases that exist in the broad swathes of information we are exposed to.
•AI encourages you not to think, and hence gradually diminishes your cognitive capacities (e.g., I’ve seen many reports of college students complaining about having to debate a peer’s “work,” as it clearly came from ChatGPT and the peer doesn’t even understand it).
Note: cognitive function continually remodels itself. As such, if you stop exercising a faculty, it gradually diminishes (e.g., when cell phones came out, I realized that having contacts made it much harder for me to remember phone numbers), and because of this, I’ve actively avoided using many of the technological aids everyone else uses because I wanted to keep my mind intact. Overall, I believe this is most important in regard to dementia, as the most successful protocols for treating conditions like Alzheimer’s emphasize continually engaging in brain exercises.
I’ve thus greatly struggled with how to navigate the AI landscape as:
•I believe the most important thing in writing is not the information you present, but rather the heart and intention behind it (discussed further here). As this is somewhat of a spiritual process, I believe it is unlikely AI will ever be able to replicate it. In turn, I do not like the way AI text sounds or feels and hence feel quite strongly about not using it (despite its potential to save a lot of time). Similarly, many of the edits AI proposes, while “correct,” break the flow of what I’m trying to convey within the writing.
•Whenever I rely upon AI while researching topics, I find that my own ability to think and understand topics (or quickly spot the important points within the papers I read through) diminishes.
•While key data on more conventionally accepted topics can be found within certain AI systems, many of the forgotten topics I focus on can’t be, so I need to be able to strike a good balance as to how much I rely upon them (particularly since they are gradually evolving).
In turn, I expect this will quickly become much more difficult as AI is on course to become more and more interwoven into our lives (e.g., the tech startup field is now shifting to making a myriad of apps run by AI).
Enhanced Scholarship
On the flip side, however, AI is helpful if used correctly and not excessively or inappropriately. For example, one of the most requested topics for me to cover here has been the uses of DMSO for cancer (which are quite profound). So, for more than a month, most of my writing time has gone into researching and drafting that article (as I feel it’s very important the article does not cut any corners). In turn, because of how complex the topic is, I’ve used AI for some of the research, and my guess is that this reduced the amount of time I needed to write the cancer article by around 30%.
Note: I am planning to release that article next Saturday. If you have any specific things you would like me to try to cover in the article, please let me know in the open thread (as it may be possible to include some of them).
Because of all of this, I feel AI is something which can be useful as an adjunctive tool you utilize when appropriate, but if it becomes your primary aid, its harms quickly outweigh the benefits it can provide (something I believe is analogous to pharmaceuticals in medicine—while they are frequently unsafe and ineffective, a doctor who does not rely upon them can still recognize the instances where they will clearly benefit patients).
Note: recently my attitude towards AI has somewhat changed, as while I often found the over-reliance on ChatGPT quite aggravating over the last year, I have found a few systems which are extremely helpful for certain applications.
In the final part of this article (which primarily exists as an open forum for you to ask your remaining questions), I will discuss which AI systems I have found to be the most useful for different types of situations I encounter (e.g., researching these articles) along with my preferred resources for combatting the great vanishing of information and finding the forgotten medical information I’m looking for.