A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Science’ Category

It’s Like Bureaucrats Aren’t Medical Experts

without comments

Lysergic acid diethylamide (LSD), 3,4-methylenedioxymethamphetamine (MDMA, or ecstasy), and psilocybin (magic mushrooms) are all categorized as Schedule I drugs, which means they have no recognized medical use and are considered dangerous to use even under medical supervision. However, as with cannabis, the scheduling of these drugs is being called into question because research indicates that they hold a great deal of promise as medical treatments that are safe to use under medical supervision:

Psychedelic drugs like LSD and ecstasy ingredient MDMA have been shown to stimulate the growth of new branches and connections between brain cells which could help address conditions like depression and addiction.

Researchers in California have demonstrated these substances, banned as illicit drugs in many countries, are capable of rewiring parts of the brain in a way that lasts well beyond the drugs’ effects.

This means psychedelics could be the “next generation” of treatments for mental health disorders which could be more effective and safer than existing options, according to the study’s authors from the University of California.

It’s almost as if the Drug Enforcement Administration (DEA) and the Department of Health and Human Services (HHS), the agencies tasked with deciding which drugs fall under which schedule, are composed primarily of bureaucrats who have little or no experience in experimental medicine.

Mind you, this groundbreaking research isn’t groundbreaking. Timothy Leary, a clinical psychologist, experimented with LSD and found that it had many promising medical uses. When he performed his initial experiments, LSD was legal. Experimentation, at least of the legal variety that can be published in journals, became a huge pain in the ass when the drug was listed as a Schedule I substance. Fortunately, scientists have become more willing to jump through the hoops required to experiment with Schedule I substances, which is why research is now rediscovering the potential medical benefits of LSD and other Schedule I drugs. Unfortunately, just because medical scientists have demonstrated that a Schedule I substance actually has potential medical uses doesn’t mean that the bureaucrats in the DEA and HHS are going to change the substance’s scheduling. We know this because cannabis, which has been shown to have numerous medical uses and to be perfectly safe to use, still remains a Schedule I substance.

Written by Christopher Burg

July 11th, 2018 at 10:30 am

The Science is Settled… Until It’s Not

without comments

I’m a skeptical man by nature, but I tend to be more skeptical of what are traditionally labeled soft sciences, such as psychology and sociology. My stronger-than-average skepticism stems from several factors.

First, and probably most importantly, experiments in these fields can’t isolate variables. When you’re experimenting on humans, one variable is the life experiences of the subjects of your experiment. Different people have different life experiences, which can lead them to act differently under the same circumstances.

Second, the subjects of experiments in fields like psychology tend to act differently when they know they’re the subjects of an experiment. This tendency isn’t unique to humans. Ravens and chimpanzees act differently when they know that they’re being watched.

Third, most experiments involving human subjects suffer from selection bias. Professors have a ready pool of humans to experiment on, Western undergrads, and use them for most experiments. Anybody with even the most basic observation skills will notice that undergrad students tend to behave differently than, say, elderly individuals.

Now I have a fourth reason for my skepticism. It turns out that the findings of many psychological experiments are, to put it nicely, rather dubious:

The Zimbardo prison experiment is not the only classic study that has been recently scrutinized, reevaluated, or outright exposed as a fraud. Recently, science journalist Gina Perry found that the infamous “Robbers Cave“ experiment in the 1950s — in which young boys at summer camp were essentially manipulated into joining warring factions — was a do-over from a failed previous version of an experiment, which the scientists never mentioned in an academic paper. That’s a glaring omission. It’s wrong to throw out data that refutes your hypothesis and only publicize data that supports it.

Perry has also revealed inconsistencies in another major early work in psychology: the Milgram electroshock test, in which participants were told by an authority figure to deliver seemingly lethal doses of electricity to an unseen hapless soul. Her investigations show some evidence of researchers going off the study script and possibly coercing participants to deliver the desired results. (Somewhat ironically, the new revelations about the prison experiment also show the power an authority figure — in this case Zimbardo himself and his “warden” — has in manipulating others to be cruel.)

The problem of manipulation isn’t unique to the so-called soft sciences. The scientific method generally assumes that the experimenter is unbiased, but what happens when the experimenter wants a specific outcome? Oftentimes, they can set up the experiment or manipulate the results in such a way that they create their desired outcome. This is especially easy to do when the subjects of an experiment are manipulable humans. A little coercion can produce the desired behavior.

I’m happy that these issues are finally being scrutinized more thoroughly. But I’m curious what the fallout will be. Science has become a religion to many people. People tend to react negatively when they learn that their priests have been lying to them and that their gods are not actually gods. Part of me worries that the backlash of this scrutiny could be a reflexive opposition to science by the masses, but then the other part of me remembers that most fans of science aren’t actually scientifically minded anyway.

Written by Christopher Burg

June 15th, 2018 at 11:00 am

It’s Scientifically Proven

without comments

I find myself ranting more and more about modern practices in scientific communities. I don’t do this because I think science is a bad thing. The scientific method, after all, is just a tool and tools lack morality. I do this because scientism, treating science as a religion, has increasingly replaced science. It seems that many people have forgotten that science also requires a healthy dose of skepticism. Without skepticism, one can publish any old paper and people will believe its findings without question. This is rather worrisome when there are so many ways for bad or at least questionable science to get published:

This has huge implications. Evidence-based medicine is completely worthless if the evidence base is false or corrupted. It’s like building a wooden house knowing the wood is termite infested. What caused this sorry state of affairs? Well, Dr. Relman, another former editor-in-chief of the NEJM, said this in 2002:

“The medical profession is being bought by the pharmaceutical industry, not only in terms of the practice of medicine, but also in terms of teaching and research. The academic institutions of this country are allowing themselves to be the paid agents of the pharmaceutical industry. I think it’s disgraceful”

This article discusses a great deal of corruption in the scientific medical community. It turns out that much of the medical science that we take for granted is tainted. One of the most interesting forms of chicanery, at least in my opinion, is selective publishing:

Selective Publication — Negative trials (those that show no benefit for the drugs) are likely to be suppressed. For example, in the case of antidepressants, 36/37 studies that were favourable to drugs were published. But of the studies not favorable to drugs, a paltry 3/36 were published. Selective publication of positive (for the drug company) results means that a review of the literature would suggest that 94% of studies favor drugs where in truth, only 51% were actually positive.

End users, like doctors, often go by published studies. If 94 percent of published studies indicate that a drug is effective, doctors are more likely to prescribe that drug. However, if that 94 percent exists only because the large number of studies showing the drug to be ineffective were never published, the end user is usually unaware of it. Moreover, even if they are aware, they generally don’t know why the studies showing the drug to be ineffective weren’t published. Was it due to methodological failures on the part of the individuals performing the studies, or was it because an executive for the drug manufacturer also sits on the board that decides what does and doesn’t get published? And to make matters even more difficult, just because a study was published doesn’t necessarily mean that its findings are reproducible; the findings of many studies cannot be reproduced.
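
To see how dramatically selective publication can skew the picture, here’s a back-of-the-envelope sketch in Python using only the counts quoted above (the article’s own figures of 94 and 51 percent round slightly differently depending on how borderline trials are counted, but the mechanism is the same):

    # Counts taken from the quoted passage above.
    favorable_total, favorable_published = 37, 36        # trials favorable to the drug
    unfavorable_total, unfavorable_published = 36, 3     # trials not favorable to the drug

    # What a doctor reading only the published literature sees.
    apparent_positive_rate = favorable_published / (favorable_published + unfavorable_published)

    # What the complete body of trials actually shows.
    true_positive_rate = favorable_total / (favorable_total + unfavorable_total)

    print(f"Share of published studies favoring the drug: {apparent_positive_rate:.0%}")  # ~92%
    print(f"Share of all studies favoring the drug:       {true_positive_rate:.0%}")      # ~51%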

This wouldn’t be as big of a problem if so many people didn’t treat published research as holy scripture. But a lot of people do. Like a Christian who flips through the Bible searching for a line that supports their agenda, many people today will search for scientific papers that support their agenda. When they find it, they will throw it down as a trump card and act as if their agenda is unassailable because it’s “backed by science.” But is their agenda backed by science? Are the findings in the paper they threw down reproducible? Were several studies refuting the study they threw down rejected from publication by somebody who shares their agenda? There really is no way for you to know.

Written by Christopher Burg

April 13th, 2018 at 11:00 am

Posted in Science

The Scientific Method Doesn’t Prove Truth

with 2 comments

Yesterday I ranted about the tendency of individuals to use vague and subjective statements in political discourse. Today I want to rant about a similar tendency: the tendency of individuals to claim that something is scientifically proven (with the implication being that it has been scientifically proven true).

The scientific method involves a continuous cycle of making observations, thinking of interesting questions, formulating hypotheses, developing testable predictions, testing those predictions, and modifying the hypotheses based on the test results. If a test demonstrates that a hypothesis is false, the hypothesis can either be rejected or modified so that the cycle can continue.

The important thing to know about this cycle is that it never proves truth. A hypothesis might continue to be treated as true so long as no experiment shows that it’s false. But the fact that a lot of experiments have failed to show that a hypothesis is false doesn’t prove that the hypothesis is true. A hypothesis might survive a million tests, but that doesn’t mean it has been proven true. The 1,000,001st test could demonstrate that the hypothesis is incorrect, in which case it might be rejected entirely or modified based on the new information learned from the test and subjected to more tests.
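
The structure of the cycle makes this point concrete. Below is a toy sketch in Python (purely illustrative; the callables passed in stand in for the real scientific work). Notice that there is no branch that ever returns “proven true”: a hypothesis either gets revised when a test contradicts it or merely survives to face the next test.

    def scientific_cycle(hypothesis, make_prediction, run_test, revise, rounds=1_000_000):
        """Toy model of the observe-predict-test-revise loop described above."""
        for _ in range(rounds):
            prediction = make_prediction(hypothesis)
            if not run_test(prediction):
                # The hypothesis failed a test: reject or modify it and keep going.
                hypothesis = revise(hypothesis, prediction)
            # If the test passed, nothing is "proven"; the hypothesis is merely
            # not yet falsified, and the cycle continues.
        return hypothesis  # still provisional, no matter how many rounds it survived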

Saying that something has been scientifically proven (true) doesn’t mean that that thing is true. It means that it hasn’t yet been proven false. While the difference between the two statements may appear to be subtle, it is important. The first statement makes a position appear unassailable, which is probably why so many people like to claim that their position is based on scientific truth. The second statement acknowledges the possibility that the basis of the position could be incorrect, which leaves the door open to changing positions based on new knowledge.

Written by Christopher Burg

March 30th, 2018 at 10:30 am

Posted in Science


Free Akkadian Dictionary

without comments

It probably won’t surprise anybody to find out that I’m a language nerd. Although I’m only fluent in English at this point and have a decent understanding of both Esperanto and Latin, I love to learn about all of the different mechanisms that humans have developed to communicate with one another. I especially love learning about ancient languages. Earlier this year I read a book on cuneiform, the earliest known writing system, and was fascinated by how the system worked (it’s a real hodgepodge compared to the alphabet we use to write English today).

For the last 90 years, scholars at the University of Chicago have been compiling an Akkadian dictionary. That near-century of effort has finally borne fruit. The University of Chicago has released its 21-volume Akkadian dictionary and, best of all, the PDFs are free (buying the physical volumes will set you back over $1,000). If you have any interest in learning about Akkadian, head over to the University of Chicago’s website and start downloading all of the volumes.

Written by Christopher Burg

August 31st, 2017 at 10:00 am

Posted in Science


Lies, Damned Lies, and Statistics

without comments

Many people like to divide science into hard and soft. Hard sciences are the ones where you can directly apply the scientific method, whereas soft sciences don’t lend themselves well to it. For example, physics is generally considered a hard science since you can replicate the results of previous experiments with new experiments. Sociology, on the other hand, doesn’t lend itself well to the scientific method because the results of previous experiments often can’t be replicated by new experiments. As if to acknowledge that fact, sociologists tend to rely heavily on statistics.

In our modern world, where science is the new god, you can’t make an argument without somebody demanding to see your scientific evidence. While such demands make perfect sense in debates about, say, physics, they don’t make much sense when it comes to social issues because you can create statistics that prove whatever you want. Case in point: a research project found that one in every 24 kids in the United States (about 4 percent) has witnessed a shooting. However, the statistic was created through a survey with a question worded in such a way as to guarantee a predetermined result:

It all started in 2015, when University of New Hampshire sociology professor David Finkelhor and two colleagues published a study called “Prevalence of Childhood Exposure to Violence, Crime, and Abuse.” They gathered data by conducting phone interviews with parents and kids around the country.

The Finkelhor study included a table showing the percentage of kids “witnessing or having indirect exposure” to different kinds of violence in the past year. The figure under “exposure to shooting” was 4 percent.

[…]

According to Finkelhor, the actual question the researchers asked was, “At any time in (your child’s/your) life, (was your child/were you) in any place in real life where (he/she/you) could see or hear people being shot, bombs going off, or street riots?”

So the question was about much more than just shootings. But you never would have known from looking at the table.

That survey was then picked up by the Centers for Disease Control and Prevention (CDC) and the University of Texas (UT), which further twisted the research:

Earlier this month, researchers from the CDC and the University of Texas published a nationwide study of gun violence in the journal Pediatrics. They reported that, on average, 7,100 children under 18 were shot each year from 2012 to 2014, and that about 1,300 a year died. No one has questioned those stats.

This is how statistics are often used to create a predetermined result. First a statistic is created, oftentimes via a survey. The first problem with this methodology is that surveys rely on answers given by individuals, and there is no way to know whether or not the people being surveyed are being truthful. The second problem is that survey questions can be worded in such a way as to all but guarantee a desired result. Once the results of the survey have been published, other researchers often take them and use them inappropriately to make whatever point they want, which is what happened in the case of the CDC and UT. Finally, you have a bunch of people making arguments based on those questionable statistics used erroneously by organizations that share their agenda.

Written by Christopher Burg

July 5th, 2017 at 11:00 am

On an Editorial Board, Nobody Knows You’re a Dog

without comments

“Where’s your peer-reviewed paper?” is a question many people instinctively ask when you present an idea that conflicts with one of their beliefs. The idea of requiring scientific peers to review research papers before they are considered scientifically sound is a good one. However, peer reviews are only as good as the peers performing them. Many “scientific” journals exist not to verify scientific rigor but to prey on gullible researchers who are often new to their field. When such journals review a scientific paper, you don’t know if the review was done by a human being or a dog:

Ollie’s owner, Mike Daube, is a professor of health policy at Australia’s Curtin University. He initially signed his dog up for the positions as a joke, with credentials such as an affiliation at the Subiaco College of Veterinary Science. But soon, he told Perth Now in a video, he realized it was a chance to show just how predatory some journals can be.

“Every academic gets several of these emails a day, from sham journals,” he said. “They’re trying to take advantage of gullible younger academics, gullible researchers” who want more publications to add to their CVs. These journals may look prestigious, but they charge researchers to publish and don’t check credentials or peer review articles. And this is precisely how a dog could make it onto their editorial boards.

The peer review process, like many things surrounding the scientific method, is often poorly understood by laymen. To those who have hoisted science onto a religious pedestal, the words “peer review” are a magical incantation that makes the words that follow infallible. To those who understand the scientific method, the words “peer review” mean that the credentials of the peers need to be verified before their review is given any weight.

There are a lot of scam artists out there, even in scientific fields. Don’t trust research just because it was peer reviewed. Try to find out whether the peers who reviewed the research are likely knowledgeable about the subject or are really just a bunch of dogs.

Written by Christopher Burg

May 31st, 2017 at 10:30 am

It’s Science!

without comments

Reason posted an article claiming that research shows that you can’t even pay somebody to read information that contradicts their beliefs. However, if you read about the methodology, you learn that the researchers didn’t actually offer to pay people to read information that contradicted their beliefs:

The study gave participants two options: they could read an article about same-sex marriage that matched their own perspective, or they could read an article about same-sex marriage that contradicted their views on the subject. They were told that if they selected the article with which they disagreed, they would be entered in a drawing to win $10. But if they selected the more comforting, self-affirming article, they would only stand to win $7.

Being entered into a lottery isn’t payment; it’s a chance at payment.
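
A quick expected-value sketch in Python drives the point home. The pool size below is a made-up assumption (the excerpt doesn’t report how many participants were entered or how many prizes were drawn), but under any plausible odds the effective incentive gap is pennies, not dollars:

    # Hypothetical pool size; the excerpt does not report the odds of winning the drawing.
    participants = 100
    p_win = 1 / participants

    ev_disagree = p_win * 10  # expected payout for choosing the article you disagree with
    ev_agree = p_win * 7      # expected payout for choosing the article you agree with

    print(f"Expected value of the $10 drawing: ${ev_disagree:.2f}")                         # $0.10
    print(f"Expected value of the $7 drawing:  ${ev_agree:.2f}")                            # $0.07
    print(f"Effective incentive to read the opposing view: ${ev_disagree - ev_agree:.2f}")  # $0.03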

I bring this article up to illustrate how poor research can quickly lead to stupid conclusions and headlines. An initial reading of the research might lead one to believe that it lends support to the possibility that some people won’t read contradicting information even if there is a reward. But when you stop to think about the methodology used, you quickly realize that the research was inadequate at addressing incentive. Some people might not be willing to read contradicting information for a chance at a slightly larger lottery prize, but they might be willing to do so for straight-up cash. $10 might not convince some people to read contradicting information, but $20 or $30 might.

I also bring this article up because it shows that neocons and neoliberals aren’t the only people who allow themselves to use poor research to reach a desired conclusion. Libertarians can and do fall into that trap as well.

Written by Christopher Burg

May 17th, 2017 at 11:00 am

The Religion of Science

without comments

One might get the impression that I’m opposed to science based on how much I’ve been harping on scientism as of late. Truth be told, I’m actually a huge advocate of science, which is why I’m investing so much time into criticizing scientism.

Science is supposed to be about using observations to develop hypotheses and testing those hypotheses through experimentation. It’s supposed to be different from faith. But most of the people cheering the greatness of science are treating it as a religion. Scientists are being treated like priests: their words are being treated as law and their characters are being treated as sacred. This has led to religious zealotry:

In late July 2014, a Twitter user named @dogboner posted a photo of a man on a subway train working on his laptop, accompanied by the caption, “Some guy using his laptop on the train like a dumbass nerd lol.” The “dumbass nerd” in question was astrophysicist, author and TV host Neil deGrasse Tyson. Instantly, “@dogboner” (whose real name is Michael Hale) faced a tweet-storm of abuse and haranguing from social media users for whom Tyson has emerged as a kind of messiah of modern rationalism.

The photo was shared on the popular Facebook page “I Fucking Love Science” (which currently engages some 25 million-plus users), leading to even more angry call-outs. Hale was called “stupid,” an “underachieving burnout,” and worse. One person encouraged Hale to “fall into an ocean of A.I.D.S.” Few had bothered to consider that the original tweet was nothing but the sort of stupid, ironized joke that savvy Twitter users major in. Legions of self-satisfied rationalists and armchair logicians who pride themselves on their superior intellect were effectively fleeced.

Beyond being (really, really) funny, the incident was revealing. It spoke to the vehemence and belligerence science seems to inspire in popular culture. It also laid bare the frothing cults of personality surrounding people like Tyson, Bill Nye, Canadian astronaut Col. Chris Hadfield (who live-streamed parts of his 2013 mission to YouTube, including a much-shared acoustic guitar rendition of David Bowie’s “Space Oddity”), and other modern pop-star scientists.

The irony, of course, is that most of the people who lashed out at Mr. Hale probably don’t know of any scientists who don’t regularly appear on television. In this way they mimic many self-proclaimed Christians who are aware only of popular televangelists and wouldn’t recognize the names of even well-known historical theological scholars.

I’m going to blame the government indoctrination system that is often mistakenly called an education system. Government indoctrination centers tend to teach by authority. What the teacher says is supposed to be accepted by the students with blind obedience. Everything written in the textbooks is supposed to be accepted as truth. Students who question the teachers or the textbooks are often dismissed with a wave of the hand or outright punished. Unfortunately, imprinting this system on children at a young age likely makes them seek out authority figures instead of seeking out knowledge.

Neil deGrasse Tyson, who I have never met but would enjoy getting a beer with sometime, has become one such authority figure. People seeking out an authority figure on science have latched onto him, as many Christians latch onto televangelists, because he’s charismatic and entertaining. However, it’s no crime to be entirely unaware of him, especially if one’s interests aren’t in astrophysics. Likewise, it’s no crime to be entirely unaware of Aziz Sancar. Who is Aziz Sancar? He’s a biochemist who won the Nobel Prize in Chemistry. I’m not a chemist, so I was also unaware of him and only found him when searching for scientists who have made notable accomplishments but haven’t appeared on every television channel known to man. My point is that most self-proclaimed lovers of science are probably entirely unaware of his existence, and that’s OK.

Science ceases to be science when it becomes blind faith and cults of personality. The masses currently demanding science-based policies appear to be primarily composed of worshipers of scientism, not people with an actual understanding of the scientific method. They don’t want science-based policies, they want policies inspired by the sermons of their priests.

Written by Christopher Burg

May 12th, 2017 at 11:00 am

Limitations of Experience

without comments

Bill Nye has gained himself a great deal of admiration and hatred by positioning himself as a public face of scientism. A lot of progressives, who tend to side with scientism, are now holding up Bill Nye as a god. Meanwhile, a lot of conservatives, who tend to side against scientism, are now holding him up as a devil.

The debate over scientism has more or less become a debate between progressives and conservatives, which means a tit for tat has developed. Conservatives are lambasting one of the progressives’ public faces, so the progressives now need to lambast one of the conservatives’ public faces. For the conservatives’ tit the progressives have chosen Mike Rowe as their tat:

This image further demonstrates that the biggest advocates of scientism have a severe lack of understanding of the scientific method. The opening words in the image make sense within the framework of the debate. Conservatives have been arguing that Bill Nye lacks experience in scientific fields and is therefore unqualified to speak about scientific matters. In return, the progressives are pointing out that Mike Rowe lacks experience in the trades. Here’s the problem: science isn’t a single discipline.

By the logic presented in the image one would certainly listen to Bill Nye on matters of mechanical engineering (at least matters that fall within his area of expertise). However, one would completely ignore anything he said about other scientific fields, such as the effects of widespread pollution on the biosphere, since he has no experience in those fields.

Of course, both sides are being foolish. The progressives’ implication that expertise in one scientific field gives an individual expertise in all scientific fields is wrong. But the conservatives’ implication that professional training is what indicates an individual’s expertise is equally wrong.

It’s quite possible for an individual to be very capable in one field and incompetent in another. Ben Carson is a great example of this point. He was a very skilled neurosurgeon. But his comment about the pyramids being grain silos shows that his knowledge in the field of archeology is, to put it nicely, lacking. Likewise, it is also possible for an individual to be very capable in a field that they don’t work in professionally. Hedy Lamarr had no formal scientific training, yet she made several important discoveries, such as using frequency hopping to prevent enemies from jamming radio-controlled torpedoes (which was also an important contribution to the development of several wireless communication technologies that we rely on today).

In summary, both sides are being stupid. Each side is slinging mud at one of the other’s public faces instead of debating the actual issues. One might expect such behavior from conservatives since they’re not beating the scientism drum, but why would progressives, who claim to be believers in science, do the same thing? Simple: progressives are no more lovers of science than conservatives are. They wield science much like conservatives wield Christianity. That is to say, they see science as nothing more than a concept they can exploit to forward their political goals.

Written by Christopher Burg

May 4th, 2017 at 11:00 am