A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Science’ Category

Believing in Science

without comments

I’ve come across a lot of people who have said that people shouldn’t support politicians who don’t “believe in science.” That phrase always amuses me.

To believe is to accept that something is true. The scientific method is the antithesis of belief. Instead of accepting something as true, the scientific method requires that every hypothesis be tested through experimentation. If experimentation doesn’t prove a hypothesis false, then there is some evidence to support it. But even then the hypothesis isn’t assumed to be true; it merely hasn’t been proven false. If a hypothesis hasn’t been proven false, the scientific method demands that further experimentation be performed. After rigorous experimentation a hypothesis may graduate to a scientific theory, but even then it isn’t assumed to be true. A scientific theory is merely an explanation for observations in the natural world that has been repeatedly tested and verified. At any point in the future an experiment could show that the explanation isn’t correct.

One should not believe in the scientific method. One should treat the scientific method as one treats a scientific theory: a tool that has proven useful in practice but isn’t necessarily the only useful tool. One should not believe what scientists have published. One should seek to recreate the results published by scientists. In other words, to truly subscribe to the scientific method one must be skeptical about all things, even the scientific method itself.

Written by Christopher Burg

September 18th, 2018 at 10:30 am

Posted in Science

Free Research

with 2 comments

I’m beginning to think that Elon Musk posts seemingly zany shit on Twitter in order to trick people into studying his problems for him for free:

SpaceX CEO Elon Musk attracted a bit of attention when he suggested that we could get there simply by nuking Mars’ poles, liberating the ice (both water and carbon dioxide ices) into the atmosphere. When asked about the prospects for the plan, a scientist said, “Whether it would really work, I don’t think anyone has worked up the physics in enough detail to say it would.” Now, a couple of planetary scientists have accepted the challenge of working up the physics, and they have bad news for Musk.

I imagine Musk sitting at home and saying to himself, “I wonder if we could nuke that water on Mars to release it into the atmosphere?” As he sits there pondering the question he realizes that he doesn’t have the physics or chemistry knowledge to figure out whether that plan is feasible. After mentally going over the physicists and chemists he does have in house he decides that they’re working on more valuable research at the moment. Finally he decides that he can just get other people to research the problem for free, logs onto Twitter, and posts that he wants to nuke the water on Mars. A few minutes later a team of curious physicists and chemists decide to run the numbers then, realizing that Musk’s idea isn’t feasible, rush to social media to say, “See? See? Mr. Billionaire is wrong!” After seeing the report Musk leans back in his chair, sips his scotch, and smirks at the thought that he has received the answer to his question without spending even a single dime.

Written by Christopher Burg

August 1st, 2018 at 10:00 am

It’s Like Bureaucrats Aren’t Medical Experts

without comments

Lysergic acid diethylamide (LSD), 3,4-methylenedioxymethamphetamine (MDMA, the active ingredient in ecstasy), and psilocybin (magic mushrooms) are all categorized as Schedule I drugs, which means they are deemed to have no accepted medical use and to be unsafe even under medical supervision. However, as with cannabis, the scheduling of these drugs is being called into question because research shows that they hold a great deal of promise as medical treatments and are safe to use under medical supervision:

Psychedelic drugs like LSD and ecstasy ingredient MDMA have been shown to stimulate the growth of new branches and connections between brain cells which could help address conditions like depression and addiction.

Researchers in California have demonstrated these substances, banned as illicit drugs in many countries, are capable of rewiring parts of the brain in a way that lasts well beyond the drugs’ effects.

This means psychedelics could be the “next generation” of treatments for mental health disorders, ones that could be more effective and safer than existing options, according to the study’s authors from the University of California.

It’s almost as if the Drug Enforcement Administration (DEA) and the Department of Health and Human Services (HHS), the departments tasked with deciding which drugs fall under which schedule, are composed primarily of bureaucrats who have little or no experience in experimental medicine.

Mind you, this groundbreaking research isn’t groundbreaking. Timothy Leary, a clinical psychologist, experimented with LSD and found that it had many promising medical uses. When he performed his initial experiments, LSD was legal. Experimentation, at least of the legal variety that can be published in journals, became a huge pain in the ass when the drug was placed on Schedule I. Fortunately, scientists have become more willing to jump through the hoops required to experiment with Schedule I substances, which is why researchers are now rediscovering the potential medical benefits of LSD and other Schedule I substances. Unfortunately, just because medical scientists have demonstrated that a Schedule I substance has potential medical uses doesn’t mean that the bureaucrats in the DEA and HHS will change the substance’s scheduling. We know this because cannabis, which has been shown to have numerous medical uses and to be safe to use, still remains a Schedule I substance.

Written by Christopher Burg

July 11th, 2018 at 10:30 am

The Science is Settled… Until It’s Not

without comments

I’m a skeptical man by nature, but I tend to be more skeptical of what are traditionally labeled soft sciences, such as psychology and sociology. My stronger-than-average skepticism stems from several factors.

First, and probably most importantly, experiments in these fields can’t isolate variables. When you’re experimenting on humans, one variable is the life experiences of the subjects of your experiment. Different people have different life experiences, which can lead them to act differently under the same circumstances.

Second, the subjects of experiments in fields like psychology tend to act differently when they know they’re the subject of an experiment. This tendency isn’t unique to humans. Ravens and chimpanzees act differently when they know that they’re being watched.

Third, most experiments involving human subjects suffer from selection bias. Professors have a ready pool of humans to experiment on, namely Western undergrads, and use them for most experiments. Anybody with even the most basic observation skills will notice that undergrad students tend to behave differently than, say, elderly individuals.

Now I have a fourth reason for my skepticism. It turns out that the findings of many psychological experiments are, to put it nicely, rather dubious:

The Zimbardo prison experiment is not the only classic study that has been recently scrutinized, reevaluated, or outright exposed as a fraud. Recently, science journalist Gina Perry found that the infamous “Robbers Cave” experiment in the 1950s — in which young boys at summer camp were essentially manipulated into joining warring factions — was a do-over from a failed previous version of an experiment, which the scientists never mentioned in an academic paper. That’s a glaring omission. It’s wrong to throw out data that refutes your hypothesis and only publicize data that supports it.

Perry has also revealed inconsistencies in another major early work in psychology: the Milgram electroshock test, in which participants were told by an authority figure to deliver seemingly lethal doses of electricity to an unseen hapless soul. Her investigations show some evidence of researchers going off the study script and possibly coercing participants to deliver the desired results. (Somewhat ironically, the new revelations about the prison experiment also show the power an authority figure — in this case Zimbardo himself and his “warden” — has in manipulating others to be cruel.)

The problem of manipulation isn’t unique to the so-called soft sciences. The scientific method generally assumes that the experimenter is unbiased, but what happens when the experimenter wants a specific outcome? Oftentimes, they can set up the experiment or manipulate the results in such a way as to produce their desired outcome. This is especially easy to do when the subjects of an experiment are manipulable humans. A little coercion can produce the desired behavior.

I’m happy that these issues are finally being scrutinized more thoroughly. But I’m curious what the fallout will be. Science has become a religion to many people. People tend to react negatively when they learn that their priests have been lying to them and that their gods are not actually gods. Part of me worries that the backlash from this scrutiny could be a reflexive opposition to science among the masses, but then the other part of me remembers that most fans of science aren’t actually scientifically minded anyway.

Written by Christopher Burg

June 15th, 2018 at 11:00 am

It’s Scientifically Proven

without comments

I find myself ranting more and more about modern practices in scientific communities. I don’t do this because I think science is a bad thing. The scientific method, after all, is just a tool and tools lack morality. I do this because scientism, treating science as a religion, has increasingly replaced science. It seems that many people have forgotten that science also requires a healthy dose of skepticism. Without skepticism, one can publish any old paper and people will believe its findings without question. This is rather worrisome when there are so many ways for bad or at least questionable science to get published:

This has huge implications. Evidence-based medicine is completely worthless if the evidence base is false or corrupted. It’s like building a wooden house knowing the wood is termite infested. What caused this sorry state of affairs? Well, Dr. Relman, another former editor in chief of the NEJM, said this in 2002:

“The medical profession is being bought by the pharmaceutical industry, not only in terms of the practice of medicine, but also in terms of teaching and research. The academic institutions of this country are allowing themselves to be the paid agents of the pharmaceutical industry. I think it’s disgraceful”

This article discusses a great deal of corruption in the scientific medical community. It turns out that much of the medical science that we take for granted is tainted. One of the most interesting forms of chicanery, at least in my opinion, is selective publishing:

Selective Publication — Negative trials (those that show no benefit for the drugs) are likely to be suppressed. For example, in the case of antidepressants, 36/37 studies that were favourable to drugs were published. But of the studies not favorable to drugs, a paltry 3/36 were published. Selective publication of positive (for the drug company) results means that a review of the literature would suggest that 94% of studies favor drugs where in truth, only 51% were actually positive.

End users, like doctors, often go by published studies. If 94 percent of published studies indicate that a drug is effective, doctors are more likely to prescribe it. However, if that 94 percent exists only because the many studies finding the drug ineffective were never published, the end user is often unaware of the distortion. Moreover, even if they are aware, they generally don’t know why the unfavorable studies weren’t published. Was it due to methodological failures on the part of the individuals performing the study, or was it because an executive for the drug manufacturer also sits on the board that decides what does and doesn’t get published? And to make matters even more difficult, just because a study was published doesn’t necessarily mean that its findings are reproducible. The findings of many studies cannot be reproduced.
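The distortion described above is simple arithmetic. Here is a minimal sketch in Python using only the trial counts from the excerpt (computed this way, the published-only share comes out around 92 percent, in the same ballpark as the quoted 94 percent):

```python
# Trial counts quoted in the excerpt on antidepressant studies.
favorable_total, favorable_published = 37, 36      # 36/37 favorable trials published
unfavorable_total, unfavorable_published = 36, 3   # 3/36 unfavorable trials published

# Share of trials that were actually favorable to the drugs.
true_positive_share = favorable_total / (favorable_total + unfavorable_total)

# Share of *published* trials that were favorable -- what a reader
# of the literature sees after selective publication.
apparent_positive_share = favorable_published / (
    favorable_published + unfavorable_published
)

print(f"Actually favorable: {true_positive_share:.0%}")      # 51%
print(f"Apparently favorable: {apparent_positive_share:.0%}")  # 92%
```

Nothing about the individual studies changed; simply dropping most of the unfavorable ones from the record nearly doubles the apparent success rate.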

This wouldn’t be as big of a problem if so many people didn’t treat published research as holy scripture. But a lot of people do. Like a Christian who flips through the Bible searching for a line that supports their agenda, many people today will search for scientific papers that support their agenda. When they find it, they will throw it down as a trump card and act as if their agenda is unassailable because it’s “backed by science.” But is their agenda backed by science? Are the findings in the paper they threw down reproducible? Were several studies refuting the study they threw down rejected from publication by somebody who shares their agenda? There really is no way for you to know.

Written by Christopher Burg

April 13th, 2018 at 11:00 am

Posted in Science

The Scientific Method Doesn’t Prove Truth

with 2 comments

Yesterday I ranted about the tendency of individuals to use unspecific and subjective statements in political discourse. Today I want to rant about a similar tendency, the tendency of individuals to claim that something is scientifically proven (with the implication being that it has been scientifically proven true).

The scientific method involves a continuous cycle of making observations, asking interesting questions, formulating hypotheses, developing testable predictions, testing those predictions, and modifying the hypotheses based on the test results. If a test demonstrates that a hypothesis is false, the hypothesis can either be rejected or modified so that the cycle can continue.

The important thing to know about this cycle is that it never proves truth. A hypothesis might continue to be treated as true so long as no experiment shows that it’s false. But just because a lot of experiments have failed to show that a hypothesis is false doesn’t prove that the hypothesis is true. A hypothesis might survive a million tests, yet the 1,000,001st test could demonstrate that it is incorrect, in which case it might be rejected entirely or modified based on the new information learned from the test and subjected to more tests.
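A classic toy example makes the point concrete: Euler’s polynomial n² + n + 41 produces a prime for every n from 0 through 39, so the hypothesis “n² + n + 41 is always prime” survives forty straight tests before the forty-first falsifies it. A quick sketch in Python:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test, sufficient for small k."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Hypothesis: n*n + n + 41 is prime for every n >= 0.
# It passes for n = 0..39, then fails:
first_failure = next(n for n in range(100) if not is_prime(n * n + n + 41))
print(first_failure)  # 40, since 40*40 + 40 + 41 = 1681 = 41 * 41
```

Forty consecutive successes proved nothing; a single counterexample settled the question.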

Saying that something has been scientifically proven (true) doesn’t mean that that thing is true. It means that it hasn’t yet been proven false. While the difference between the two statements may appear to be subtle, it is important. The first statement makes a position appear unassailable, which is probably why so many people like to claim that their position is based on scientific truth. The second statement acknowledges the possibility that the basis of the position could be incorrect, which leaves the door open to changing positions based on new knowledge.

Written by Christopher Burg

March 30th, 2018 at 10:30 am

Posted in Science

Free Akkadian Dictionary

without comments

It probably won’t surprise anybody to find out that I’m a language nerd. Although I’m only fluent in English at this point and have a decent understanding of both Esperanto and Latin, I love to learn about all of the different mechanisms that humans have developed to communicate with one another. I especially love learning about ancient languages. Earlier this year I read a book on cuneiform, the earliest known writing system, and was fascinated by how the system worked (it’s a real hodgepodge compared to the alphabet we use to write English today).

For the last 90 years scholars at the University of Chicago have been compiling an Akkadian dictionary. That near-century of effort has finally borne fruit. The University of Chicago has released its 21-volume Akkadian dictionary, and best of all, the PDFs are free (buying the physical volumes will set you back over $1,000). If you have any interest in learning about Akkadian, head over to the University of Chicago’s website and start downloading the volumes.

Written by Christopher Burg

August 31st, 2017 at 10:00 am

Posted in Science

Lies, Damned Lies, and Statistics

without comments

Many people like to divide science into hard and soft. Hard sciences are the ones where you can directly apply the scientific method, whereas soft sciences don’t lend themselves well to it. For example, physics is generally considered a hard science since you can replicate the results of previous experiments with new experiments. Sociology, on the other hand, doesn’t lend itself well to the scientific method because the results of previous experiments often can’t be replicated by new experiments. As if to acknowledge that fact, sociologists tend to rely heavily on statistics.

In our modern world where science is the new god, you can’t make an argument without somebody demanding to see your scientific evidence. While such demands make perfect sense in debates about, say, physics, they don’t make much sense when it comes to social issues because you can create statistics that prove whatever you want. Case in point: a research project found that one in every 24 kids in the United States has witnessed a shooting. However, the statistic was created through a survey with a question worded in such a way as to guarantee a predetermined result:

It all started in 2015, when University of New Hampshire sociology professor David Finkelhor and two colleagues published a study called “Prevalence of Childhood Exposure to Violence, Crime, and Abuse.” They gathered data by conducting phone interviews with parents and kids around the country.

The Finkelhor study included a table showing the percentage of kids “witnessing or having indirect exposure” to different kinds of violence in the past year. The figure under “exposure to shooting” was 4 percent.

[…]

According to Finkelhor, the actual question the researchers asked was, “At any time in (your child’s/your) life, (was your child/were you) in any place in real life where (he/she/you) could see or hear people being shot, bombs going off, or street riots?”

So the question was about much more than just shootings. But you never would have known from looking at the table.

That survey was then picked up by the Centers for Disease Control and Prevention (CDC) and the University of Texas (UT), who further twisted the research:

Earlier this month, researchers from the CDC and the University of Texas published a nationwide study of gun violence in the journal Pediatrics. They reported that, on average, 7,100 children under 18 were shot each year from 2012 to 2014, and that about 1,300 a year died. No one has questioned those stats.

This is how statistics are often used to create a predetermined result. First a statistic is created, oftentimes via a survey. The first problem with this methodology is that surveys rely on answers given by individuals, and there is no way to know whether the people being surveyed are being truthful. The second problem is that survey questions can be worded in such a way as to all but guarantee a desired result. Once the results from the survey have been published, other researchers often take them and use them inappropriately to make whatever point they want, which is what happened in the case of the CDC and UT. Finally, you end up with a bunch of people making arguments based on those questionable statistics, used erroneously by organizations that share their agenda.
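The effect of bundling several events into one question is just the arithmetic of unions: the probability that a child was exposed to any of several kinds of violence is always at least as large as the probability of any single kind. A sketch with purely hypothetical exposure rates (these numbers are illustrative, not from the study):

```python
# Hypothetical lifetime exposure rates, assumed independent -- illustrative only.
p_shooting = 0.01   # saw or heard an actual shooting
p_bombs = 0.005     # heard bombs going off
p_riots = 0.03      # saw or heard a street riot

# P(at least one exposure) = 1 - P(no exposures), under independence.
p_any = 1 - (1 - p_shooting) * (1 - p_bombs) * (1 - p_riots)

print(f"{p_any:.2%}")  # 4.45% -- several times the shooting-only rate
```

A table that labels the combined figure “exposure to shooting” multiplies the single-event rate several times over without any respondent ever giving a false answer.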

Written by Christopher Burg

July 5th, 2017 at 11:00 am

On an Editorial Board, Nobody Knows You’re a Dog

without comments

“Where’s your peer-reviewed paper?” is a question many people instinctively ask when you present an idea that conflicts with one of their beliefs. The idea of requiring scientific peers to review research papers before they are considered scientifically sound is a good one. However, peer review is only as good as the peers performing it. Many “scientific” journals exist not to verify scientific rigor but to prey on gullible researchers who are often new to their field. When such journals review a scientific paper, you don’t know whether the review was done by a human being or a dog:

Ollie’s owner, Mike Daube, is a professor of health policy at Australia’s Curtin University. He initially signed his dog up for the positions as a joke, with credentials such as an affiliation at the Subiaco College of Veterinary Science. But soon, he told Perth Now in a video, he realized it was a chance to show just how predatory some journals can be.

“Every academic gets several of these emails a day, from sham journals,” he said. “They’re trying to take advantage of gullible younger academics, gullible researchers” who want more publications to add to their CVs. These journals may look prestigious, but they charge researchers to publish and don’t check credentials or peer review articles. And this is precisely how a dog could make it onto their editorial boards.

The peer review process, like many things surrounding the scientific method, is often poorly understood by laymen. To those who have hoisted science onto a religious pedestal, the words “peer review” are a magical incantation that renders the words that follow infallible. To those who understand the scientific method, the words “peer review” mean that the credentials of the peers need to be verified before their review is given any weight.

There are a lot of scam artists out there, even in scientific fields. Don’t trust research just because it was peer reviewed. Try to find out whether the peers who reviewed the research are likely knowledgeable about the subject or are really just a bunch of dogs.

Written by Christopher Burg

May 31st, 2017 at 10:30 am

It’s Science!

without comments

Reason posted an article claiming that research shows that you can’t even pay somebody to read information that contradicts their beliefs. However, if you read about the methodology, you learn that the researchers didn’t actually offer to pay people to read information that contradicted their beliefs:

The study gave participants two options: they could read an article about same-sex marriage that matched their own perspective, or they could read an article about same-sex marriage that contradicted their views on the subject. They were told that if they selected the article with which they disagreed, they would be entered in a drawing to win $10. But if they selected the more comforting, self-affirming article, they would only stand to win $7.

Being entered into a lottery isn’t payment, it’s a chance at payment.

I bring this article up to illustrate how poor research can quickly lead to stupid conclusions and headlines. Initially reading the research might lead one to believe that it provides evidence that some people won’t read contradicting information even when offered a reward. But when you stop to think about the methodology used, you quickly realize that the research was inadequate for addressing incentive. Some people might not be willing to read contradicting information for entry into a lottery with a slightly better payoff, but they might be willing to do so for straight-up cash. A chance at $10 might not convince some people to read contradicting information, but $20 or $30 in hand might.
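The gap in expected value is even smaller than the $3 difference in prizes suggests. Assuming, hypothetically, a single winner and 100 entrants per drawing (the excerpt doesn’t give the odds), the monetary incentive to pick the uncomfortable article works out to pennies:

```python
# Hypothetical drawing size; the actual odds aren't stated in the excerpt.
entrants = 100

# Expected value of each option: prize divided by number of entrants.
ev_disagree = 10 / entrants  # entered in a $10 drawing for the opposing article
ev_agree = 7 / entrants      # entered in a $7 drawing for the comfortable article

incentive = ev_disagree - ev_agree
print(f"${incentive:.2f}")  # $0.03 -- three cents of expected incentive
```

Concluding that people won’t read opposing views “even when paid” from a three-cent expected incentive is quite a leap.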

I also bring this article up because it shows that neocons and neoliberals aren’t the only people who allow themselves to use poor research to reach a desired conclusion. Libertarians can and do fall into that trap as well.

Written by Christopher Burg

May 17th, 2017 at 11:00 am