My series on cheating is about very disturbing trends in science. These posts will draw a gloomy picture of modern science. And I am not talking from the outside like philosophers in so-called 'science studies', who mostly would not recognize a test tube if you hit them over the head with one.

I am still personally involved (!), which in turn means that I cannot present you with anything but the mere tip of the iceberg; otherwise I would kill my own career. I have almost no career left as it is, because I blew the whistle while doing my PhD. Big mistake – stupid me still believed in the rationality and morality of scientists; I fell for the masks that give you the impression that at heart they want to defend scientific integrity if called upon.

I should have quietly gone along, recorded it all, and secured myself at least a position as a principal investigator before blowing the whistle (I could have achieved that more than ten years ago had I stayed on in Britain as I was urged to do, just slowly bootlicking my way up the academic hierarchy, and I would have achieved it by now, too, had I simply gone along with publishing BS at USC).

Anyways, what does this all mean for you, dear reader? It means that what I tell you here is veiled, and I cannot, will not lift the veil fully for many years to come.

Today, I will tell you another little trick with which you can produce the results you need and construct your career. Again: this is not a Jan Hendrik Schoen type of trick; this one is out of the toolbox which scientists use all the time. In nanotechnology, average particle sizes are important: small size is the penis length of nanotechnology. And so, if for example scientists working in nanotechnology look at a surface in order to establish the average domain size, they often count as follows (nm stands for nanometer = 10⁻⁹ meter):

[Figure: surface view of the sample, showing four domains of edge length 1 nm and three of edge length 2 nm]

This surface shows N_1nm = 4 small domains with an edge length of only d = 1 nm, and N_2nm = 3 larger ones with d = 2 nm. Thus the average edge length <d> is:

<d> = [(4)(1 nm) + (3)(2 nm)] / (4 + 3) = (10/7) nm = 1.43 nm
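If you want to check the arithmetic, here is a minimal sketch in Python (the variable names are mine; the counts are simply the ones read off the picture above):

    # Number-weighted average edge length on the surface:
    # 4 domains of 1 nm and 3 domains of 2 nm, every domain counted once.
    counts = {1.0: 4, 2.0: 3}          # edge length d in nm -> number of domains
    d_number = sum(n * d for d, n in counts.items()) / sum(counts.values())
    print(d_number)                    # 10/7 nm = 1.4285...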


Let me first tell you why this is "good" and then why it is wrong. It is good for two reasons. First, this method results in a small average size <d>: you count the smaller domains with equal weight, although they make up less area and less volume of the sample (after all, most properties depend on how much of something you have, not on how many pieces there are). The area-specific average <d>_A is larger:


A_1nm = 4 · (1 nm)² = 4 nm²    A_2nm = 3 · (2 nm)² = 12 nm²

<d>_A = [(4 nm²)(1 nm) + (12 nm²)(2 nm)] / (4 nm² + 12 nm²) = (7/4) nm = 1.75 nm
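Again as a sketch with the same counts, now weighting each domain by the area it actually covers:

    # Area-weighted average: each domain is weighted by its area d**2.
    counts = {1.0: 4, 2.0: 3}                               # d in nm -> number of domains
    areas = {d: n * d**2 for d, n in counts.items()}        # 4 nm^2 and 12 nm^2
    d_area = sum(a * d for d, a in areas.items()) / sum(areas.values())
    print(d_area)                                           # 7/4 nm = 1.75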


1.75 is bigger than 1.43, and remember, nanotechnology aims at selling small stuff. Second, and more importantly: the experiment is the same, the data are the data, and so the average particle size is sold as data, as experimental evidence (not as a theoretical model). Yet it is entirely up to you whether you publish <d> or <d>_A, in order to support whatever model you want to support, or to confirm whatever reference you like to conform to. Do not expect peer review to be a regulator here. Nobody has time to review such details – they need to publish their own papers!


So why is <d> actually wrong? The area you look at is a random cut through the sample. <d> and <d>_A should come out the same if you had instead cut the sample, say, 1 nm below where you happened to cut it. The sample is somewhat like the following picture (all schematic of course; real samples are more random, but these issues can be made rigorous by integrating over probability density functions):

[Figure: three-dimensional view of the sample, containing eight domains of edge length 1 nm and three of edge length 2 nm]

The top surface of this sample is the first picture above, and the bottom surface, or any cut at a different height, gives the same statistics. If you need a slightly different average particle size in order to support whatever it is you aim to support, you can count inside the volume instead:

N_1nm = 8    N_2nm = 3

<d> = [(8)(1 nm) + (3)(2 nm)] / (8 + 3) = (14/11) nm = 1.27 nm
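The same one-liner as before, fed with the volume counts instead of the surface counts:

    # Number-weighted average, now counting every domain inside the volume:
    counts = {1.0: 8, 2.0: 3}          # d in nm -> number of domains in the whole sample
    d_number = sum(n * d for d, n in counts.items()) / sum(counts.values())
    print(d_number)                    # 14/11 nm = 1.2727...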

Again: the sample is the same, and your scientific paper is based strictly on data, but here is yet a third value. It is even smaller, but less convenient to calculate. And since <d> came out differently the first time around, you also know that calculating <d> in this way is wrong both times: an average that changes depending on where you happen to cut cannot characterize the sample.


Let us see what a volume-specific average <d>_V does:

V_1nm = 8 · (1 nm)³ = 8 nm³    V_2nm = 3 · (2 nm)³ = 24 nm³

<d>_V = [(8 nm³)(1 nm) + (24 nm³)(2 nm)] / (8 nm³ + 24 nm³) = (7/4) nm
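And the volume-weighted version, again just to verify the numbers:

    # Volume-weighted average: each domain is weighted by its volume d**3.
    counts = {1.0: 8, 2.0: 3}                               # d in nm -> number of domains
    volumes = {d: n * d**3 for d, n in counts.items()}      # 8 nm^3 and 24 nm^3
    d_volume = sum(v * d for d, v in volumes.items()) / sum(volumes.values())
    print(d_volume)                                         # 7/4 nm = 1.75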

The area-specific and the volume-specific averages agree; they are both 7/4 nm. This is science!
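To connect this with the remark above that these issues can be made rigorous with probability density functions: for a continuous size distribution p(d), each average is a ratio of integrals, <d>_w = ∫ d·w(d)·p(d) dd / ∫ w(d)·p(d) dd, with weight w(d) = 1 for the plain count, d² for the area-specific and d³ for the volume-specific version. Here is a rough numerical sketch; the log-normal distribution is just an assumed example, not data from any real sample:

    import numpy as np

    # Assumed example: log-normal size distribution p(d), median 1.5 nm, width 0.4.
    d = np.linspace(0.1, 10.0, 10000)                # edge lengths in nm (uniform grid)
    mu, sigma = np.log(1.5), 0.4
    p = np.exp(-(np.log(d) - mu)**2 / (2 * sigma**2)) / (d * sigma * np.sqrt(2 * np.pi))

    def weighted_mean(power):
        # <d>_w with weight w(d) = d**power (0 = number, 2 = area, 3 = volume)
        w = d**power * p
        return np.sum(d * w) / np.sum(w)             # crude Riemann sums; the grid spacing cancels

    print(weighted_mean(0), weighted_mean(2), weighted_mean(3))

The number-weighted value comes out smallest, and heavier weighting pulls the average up, just as in the toy example above.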


BTW, in case you think the differences between these formulas are small: given the right formulas involving critical sizes (a bunch of tricks I may introduce another time), a small difference can be turned into a huge difference, if so desired.

What is basically done all over science (I have been an active member in string theory, helium nanotechnology, neuroscience, cosmology, and other sub-fields) is to use a bunch of such tricks. This is just one harmless one out of the toolbox; error calculation is a much more powerful one, and one that moreover can never be tracked by reviewers or by people who want to reproduce results. Who wants to reproduce anyways? You need to come up with novel stuff, not reproductions.


People cover their tracks with references and “good reasons”. There is so much written nowadays that you will find high-impact-factor journal references to argue for any shit you can imagine. And if anybody should ever doubt any details, you can always find a “good reason” for why you employed a certain method.


Also, before science believers comment in a knee-jerk reaction that a critique of such methods would surely get published: no, I tried that. If you are not already famous (in which case you have your own dirt to hide), the cliques you criticize are the peer reviewers and editors. It is career suicide. I have gotten only one little critique published, and only after hiding the message and through additional strange circumstances (cannot, will not tell), and it is collectively ignored in the very field that it aims at. Moreover, trying to publish it in the journals it should have been published in (many years, no success) has helped to destroy my career in the helium cluster community. Anyways, the message should be clear:


Exact science is not like the softy stuff of those lousy social scientists. Real science is always based on data and always comes with proper error calculation. And as long as you buy that, … well, go think about it!