In the Trisolaran headquarters, four light years away, Tri-General Miao receives first reports from the Earth taskforce.
– General, this is tri-colonel Bao. The first sophon has reached the Earth. We are ready to begin the program of interference with their research activities. We will use the sophon to drive their top scientists crazy and prevent them from making the discoveries they are working on.
– General, I am tri-lieutenant Zuo. Perhaps first you would like to hear a report on what they are working on there on Earth?
– Hmm… Hmm… Sure, that sounds interesting. How close are they to the breakthroughs that will endanger our superiority?
– Okay. So there is a group of their elite minds who are busy with the problem of achieving conducting materials without any heating loss, under ambient conditions. They call this ‘room-temperature superconductivity’.
(A lot of raised ocular framing organs around the room)
– That’s right. They are thinking this would somehow solve their energy woes. But from the looks of it most of what they do is just scream at each other.
– Lieutenant, this must be an anomaly, tell us what other research they have going!
– Certainly, Sir. For example, they are obsessed with finding something they call ‘non-abelian anyons’, for which they would need to confine themselves to one or two dimensions, rather than expand into 11 dimensions, and they need to reach very low, completely useless energies. They do it just because they wrote down mathematical equations that tell them this should be possible.
(Telepathic chuckling is transmitted throughout the room)
– Most bizarre! And for what possible benefit might they be looking for these ‘anyons’?
– Oh, this is the craziest part. Quite a lot of them are busy with building what they call a ‘quantum computer’. They think that by using entanglement between particles they will solve computational problems a lot faster!
– HAHAHAHAHAHHAAH hold on – a quantum computer? HAHAHAHA I just can’t HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHHAHHHAAAAAA
– Now, wait let me catch my breath. Okay, now seriously. Are they at least sharing data with each other so that they could understand that none of this will ever work?
– No Sir, they are not.
– Okay, tri-colonel – abort the sophon interference program.
– Yes Sir! Why did we even send the damn thing there? I am sorry for the wasted resources. What should my punishment be?
– Well, in principle you deserve to die. But I have an even worse idea for you: you will create a “science journal”, and you will write down and publish all of their silly projects.
The next-generation major scientific project will streamline numerous discoveries
Most people imagine scientists walking around their labs in white coats, looking through microscopes and shaking beakers of fuming chemicals. But some research projects are much larger in scale. Take CERN, for example: its main accelerator is an underground ring about 17 miles in circumference, next to Lake Geneva, used to accelerate elementary particles and smash them together at vast speeds in an effort to crack them apart. There is also LIGO, which uses large mirrors separated by miles to detect gravitational waves. There are big telescopes, and projects such as the ISS (International Space Station) for which several countries had to team up.
AI Rendition of the future Feynman Tunnel (DALL E)
These and other colossal experiments employ thousands of scientists who are trying to answer humanity’s biggest questions related to the origins of the Universe and the nature of matter, even seeking other civilizations. Andrew Longerthought, a philosopher of science from the University of Upper Lowlands, says “In the most existential sense, this gifts us our purpose – as long as we are trying to find the answers, we are truly alive”.
Perhaps for this reason, and also for the prospect of staying busy for decades to come, scientists greeted with exhilaration the news that the House passed the $1.2 Bn ‘2024 Feynman Tunnel Act.’ The Feynman tunnel, if the Senate agrees, the President signs, and funds are subsequently appropriated, will be the next big project, “possibly the most important scientific effort of the XXI century” according to NIT physicist Hersh Inidics.
Richard Feynman, the namesake of the project, was a Nobel Prize-winning theoretical physicist, celebrated for discoveries in quantum electrodynamics and superfluidity. He was involved in the Manhattan Project and introduced the so-called ‘Feynman diagrams’ – a shorthand pictorial language for writing down complex equations. But scientists know him best not for this historically significant (if difficult to understand) work, but for his famous quote:
“The first principle is not to fool yourself – and you are the easiest person to fool.”
While only a rare scientist can read, let alone write down, even the simplest Feynman diagram, all of them know this quote, and often recite it to each other with pride. It is this principle that underlies the foundation of the Feynman tunnel. The idea the project aims to test is that tunnel vision, often associated with “fooling yourself,” can greatly increase the efficiency of any scientist, whether a physicist, a chemist, a marine biologist, or a sociologist. By focusing attention on “the prize,” and not being distracted by lesser explanations, minor or even major errors in research methodology, leaps of logic, and other distractions, scientists should be able to attain their discovery goals at never-before-seen rates.
The tunnel is planned as a physical tunnel which reaches extreme lengths and depths and crosses over into a metaphysical tunnel. In the physical world, it will connect Bell labs in New Jersey with Delft Institute of Technology in the Netherlands, crossing through Cambridge, MA and the Moon. Construction will take 10 years, but the metaphysical component already exists and only needs to be “linked up,” according to the journal Nature of Things [citation needed].
“We expect that scientific output will quadruple once the tunnel is fully operational” says Henk Schoenenfrauden, former PhD, the spokesman for the project. Publishing infrastructure needs to be updated, with several thousand new journals planned to start accepting submissions in anticipation of the launch date in 2034. “This will be a revolution not seen since the printing press was first used to slice bread,” added Longerthought.
A Scientist walking down the Garden Path section of the tunnel (DALL E)
What I learned from asking people about their data, or talking about data sharing
In the past four years I have asked a number of people to share their data with me, mostly to check the accuracy of their claims, but several times to further our own research on similar topics and to perform additional analysis of our own. To be clear, I am talking about data underlying publications, either in traditional journals or on arXiv. I have also spoken about data sharing broadly; in fact, I do so in every one of my public talks. The conversations about data rarely surprise me: they follow one of a very few (3-4) trajectories. So I decided to write up a summary of the arguments I use for each type of response I get.
I am a condensed matter physicist, and the first thing I can tell you is that a majority of people in my community, especially the students and postdocs, understand the importance of sharing data. At the same time, there is a fraction of us whose attitude towards data sharing is … uninitiated. More often than I’d like, I find myself in a situation where I broach the topic with a person who claims they are being asked about this for the first time in their adult life. My impression from other academic communities, not too far afield – for instance astrophysics – is that they share their data much more readily, and even foster a culture of doing so. There is nothing unique about my physics community, and there should be nothing culturally exceptional in how they react to data requests. But because we do not promote discussion of the subject, people come up with fairly naive arguments for why they are hesitant to share their data. I imagine people had many similarly silly arguments in the past about why they should not use soap or brush their teeth.
My main argument for why you should share your data is that without the data available, your report, your paper, or the claim you make in a presentation has no real value. A paper is just an illustrated essay, and a talk you give is a performance. Your actual primary product is your data, along with any analysis steps you undertook starting from your data. This is what secondary products like publications draw their worth from. By sharing data, you allow for verifiability, for un-skewing of your conclusions, for further analysis and reuse.
Why you should not share
That being said, there is one really good reason for you not to share data. It is if your data never existed: if you made it up, if you manipulated it, if you cherry-picked your data in extreme ways, if you don’t have enough data to justify your claims, or if you know of major errors in your analysis and are not willing to address them. If any of these are the case, then indeed – you are better off resisting requests to see your data. The sad situation is that at the moment, there are no really good mechanisms to compel you to release your data. This may change in the near future, with new government agency regulations coming into effect. Who knows, perhaps more publishers will follow the example of PNAS and start retracting papers for which data requests are denied.
There are also cultural changes happening. Already now, if it becomes known that you are not willing to share your data, you can bet that many people will think that you have something to hide, and that your results are not reliable in some way. I certainly do. So the tradeoff when you refuse to share data will be: you may stay out of tangible trouble, such as retractions and research integrity investigations, but you will also build up suspicion around your work. Keep in mind that refusal to share data will increasingly become its own research integrity concern, and it already is in some countries, such as the Netherlands and Denmark.
“They will twist my results and make them look bad”
One kind of reaction to a request to share data is that the original author does not trust you. They claim you are out to get them, catch them out on some inconsistency that is either made out of thin air, or has a simple explanation. No work is perfect and there is always something to nitpick. The idea is that people ask you for your data to use it in an unjust and personal attack against you or your colleagues.
I cannot say this never happens, but certainly the frequency at which people claim injustice towards themselves is too high. More often than not, the data request is a form of scientific challenge that the receiver chooses to interpret as a personal attack – perhaps assisted by the request not being sufficiently polite. Sometimes, the data are in fact bad and the author is worried that this will be exposed. Other times the author is caught off guard by an unusual request that they never had to deal with due to others mostly trusting their claims. Their reaction is to suspect that you have a personal agenda.
But even if you are concerned that you are under attack from an actual troll who is out to get you in the most unfair way possible, you should still share your data. If your work is solid, then the data will speak for themselves. It is very hard to make an untrue critique stick: if someone writes an untrue comment criticizing your work, you will be able to explain what they got wrong. If they find a minor error, you should be happy and grateful. If they find a major problem, you should be even more grateful, though it is also fine to feel somewhat embarrassed.
Besides, the current system has layers upon layers of protection for the claim makers, and obstacle after obstacle for people challenging somebody else’s work. In my opinion the system is off balance and should be modernized. But the way things stand it is very hard to take down a published study even if the takedown is justified, and impossible to do so if the study is valid. There are simply no examples of that happening in our field, and certainly not enough to justify withholding data that can validate published claims.
“Other people published similar claims, so no need to see our data”
This is just silly. Where to begin? If your work is reliable and reproducible, and this has already been established extensively by others, then what is your worry about sharing your data? It will only make your results more verifiable, more useful. If people are already following up on your work in their own labs, then giving them more information will only enhance the impact of your own findings.
On the other hand, there is a well-known phenomenon called ‘confirmation bias’. This is when you make a claim, and others take it as a cue that this is what their results should look like. Sometimes they already had similar-looking data, but did not think much of it. Yet after reading your paper they decide that they can make their own claim that mirrors yours. In one example, the authors found some “quantized plateaus” following a paper that was later retracted. There are also examples of people jumping the gun and claiming a discovery that was physically plausible but was only actually made later.
More generally, if one group reaches a finding following another group’s lead, it does not mean that the first group was correct, or completely correct. Experiments are never identical. Sometimes different techniques are used to explore similar questions, different analysis is performed, different materials are used. So a confirmation is rarely very clean. A variation on this theme comes when your own work reproduces your earlier work, and the data request is about the first one. If you have not shared data from either study, then both are equally unconfirmable. Each of your works just needs to be able to stand on its own, and not rely on subsequent works by yourself or others.
“Our data are not presentable”
Some react to data requests with mild shame because they feel their data is not organized well enough for others to see, or perhaps it is in an archaic format, not properly described. A related issue is if they were not keeping track of their data and are not sure it is still there. I have even heard an argument that data could not be shared in order to protect student privacy, because students make personal notes in the lab journals, writing down things that reveal gaps in their understanding. You imagine others looking at your materials and judging you, like people often do with other people’s code.
To that I say: share it anyway! I agree we should all put more effort into making data “findable, accessible, interoperable, and reusable” (FAIR). But the truth is, most of our data is not organized well enough, so we are all in the same boat. Despite this, I direct my group members to share all their data without waiting for requests, at the time of arXiv publication. I do plan to develop better data organization standards, but in the meantime, I am fairly certain that our data is not particularly embarrassing to share, because it is typical. There is a much greater benefit to making it public than to hiding it until we find time to reorganize it. In fact, the one repository we were trying to arrange very neatly has remained unpublished for several years… If someone gets interested in our data, we will work with them to help them read it, plot it, and answer any of their questions. This has already happened a couple of times with the data we published.
“I don’t know what to share, there is too much data!”
The first thing I would ask you back is: how much is too much? If it is less than 50 GB per experiment, then it will all fit into a single Zenodo record, already now. And the file limits will likely grow in the future. The days when arXiv would only give you a couple of megabytes per record are long gone! You could simply share your entire dataset. And this is great because you do not need to curate it at all, so you spend zero time figuring out what you should share.
If you have substantially more data than tens of gigabytes per project, which could be the case with some synchrotron measurements, or numerical simulations – then you could share the code you used to process your data down to make your paper figures, along with some examples of the original data. See this article about how CERN does it, they have way too much data to even store it all, so they publish their process.
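If you want to automate the size check above, here is a minimal sketch of how you might total up a dataset directory before uploading. The 50 GB figure reflects Zenodo's default per-record limit at the time of writing (check their current policies); the directory layout and function names are my own illustration, not part of any standard tool:

```python
import os

# Default Zenodo per-record limit, assumed to be 50 GB (decimal gigabytes).
ZENODO_LIMIT_BYTES = 50 * 10**9

def dataset_size_bytes(root):
    """Sum the sizes of all files found recursively under `root`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

def fits_in_one_record(root, limit=ZENODO_LIMIT_BYTES):
    """True if the whole dataset fits within a single repository record."""
    return dataset_size_bytes(root) <= limit
```

If `fits_in_one_record` returns True, you can simply archive the whole directory and upload it as-is; if not, that is the signal to fall back on sharing the processing code plus representative raw examples, as discussed above.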
“No one will be able to follow all my data!”
Chances are, you and your collaborators already did some curation as you were sorting through the data, so you could add those summaries, writeups, and PowerPoint presentations to your record; they will be an excellent map to how you did your work and what you found most interesting. You surely kept records of which experiments you ran and which data they produced, in order to keep track. Share these as well.
“Some of my data is garbage anyway”
A related concern I hear is that some of the data is calibration data, “garbage data” that nobody should be looking at, and it can simply be ruled out as anything useful. Here you should be careful. If by garbage data you mean measurements with the cable unplugged, or a broken STM tip, I agree – no need to share it. But if these are measurements from a sample that appears to be alright, and they simply do not show the effect you were looking for, or it looks worse than in the sample you deem your best – then you cannot justify removing those data. They are very valuable for your peers to see. They would give your peers a proper impression of your full study, and help overcome the very common positive bias in your analysis. I say this as a person who spent years, and directed students and postdocs, following up on the work of others, which turned out to be a wild goose chase because the claims were based on a single working nanowire; most of the experiments failed, but that was never reported.
“I am still publishing more analysis on the same data”
This goes to the worry of being scooped if you share too many details of your work. Somebody else may even perform the analysis of your own data that you were planning on doing, ahead of you. Personally I am not bothered by this, and I would even be thrilled if it happened to our work. It is why I insist on sharing our data: I would like there to be more and better research on my topic and building on my group’s work.
But if you have definite plans to publish a series of papers using the same set of data, and will only share the data when that is done, I would reluctantly agree that this is reasonable. You may also face administrative restrictions on data sharing, for instance if you are filing a patent, or the data impact national security – but these are very rare situations. In general, if you have already published a paper or filed for a patent, you can safely share the data. As a basic principle, I think the full data underlying any paper should be available as soon as that paper is published, starting with the publication on arXiv. Other researchers will start interacting with your work from that moment, and they should not have to wait for you to complete other studies, which can take years.
“I will share upon a reasonable request”
“Data available upon reasonable request” is really a meaningless sentence, in my opinion. Perhaps you added it to your paper automatically, as one of the default and obligatory phrases at the end of your manuscript. In that case, I encourage you to spend a few minutes and define for yourself: what, for you, is a reasonable request? Or approach it from the opposite direction: what would be an unreasonable request? When I did this, my answer was that there is no such thing as an unreasonable request. Because all of my data from recent papers is shared, there is no burden involved in sharing it, only in guiding people through the repositories. If you think the requests could be burdensome, that could be a wake-up call to put some effort into locating and structuring your data. For instance, if you are a PI and you don’t know where your students and postdocs keep the data, you should get on top of this.
You may be concerned that a request is frivolous. Perhaps you imagine some data scavenging researchers that just go around asking everyone for their data like bots that scavenge the internet. I don’t know if they exist, but I have never met one.
If you did this exercise and can define what a reasonable data request is, I would love to hear it! As long as it is not “I will know whether it is reasonable when I see it”, because that gives you full power to reject any data sharing request. Instead, you should just share your data. So if you do add the “reasonable request” statement to your paper, try to think through what you would do if and when a request comes. Which data will you share? How will you do it? How long will it take you?
When choosing a research group, ask about their attitudes towards research transparency and reproducibility
I am writing this for folks applying to graduate school (and, to a much lesser extent, for the schools hoping to recruit them). In brief, I describe how to judge a school not only by conventional criteria but by other things that matter even more. Chief among these is the research integrity track record, and promise, of any place to which you make the effort to apply, and certainly of any place whose offer you are about to accept. My discussion is focused on the US, where application deadlines are coming up in the next few weeks, but I suspect it will also be helpful for those elsewhere, and even for scientists in, or transitioning into, industry positions.
Devil in a research lab, by Bing AI
For those in the US and Canada, where you apply to a university and not necessarily to a particular research group, the first piece of advice, which is not unique or new, is this. Figure out which groups you are interested in being part of. There should be at least one that works for you, or do not apply to that school. Don’t try to reverse engineer from “this place is famous, so there must be something I can do there”. Read up on the groups that interest you, and understand what they do. At the very minimum, if you describe your thoughts about them, it will make your application more compelling. But broadly speaking, you will be spending multiple years at the place and not at the university-at-large but in one single lab, so you can shape your future by studying up.
The low-hanging fruit is to pick up some keywords from the general snippets available on their websites. You may also google them or even check their social media, though the representation of senior scientists there is fairly patchy – they are old! It is common to check the list of publications. Many have a profile on Google Scholar, where the first thing you see is their h-index and number of citations, and, because of how Google sorts the list, the top of the list shows the papers in prestigious journals, like N****e and S*****e. I understand and sympathize with the impulse to analyze this, at least as a starting point. You may feel that publications reflect the productivity of the group, because you don’t want to be stuck in a dead-end project, or because you want the best chance of landing a top position after graduate school. You may feel that this depends on what kind of papers you will be able to write. I cannot say this is untrue today, though I hope it changes over time.
I encourage you to study deeper, however. I recognize that this takes extra time. And if you are applying to 10+ schools the effort can add up. Still, it is worth it, at least for those places that communicate with you, invite you for a meeting online, or a visit to campus. You can open some of the papers that sound interesting to you and read them. Try to understand what was done. Try to assess the evidence for yourself and glean the research style from how the paper is written. Does it sound confident and make strong claims? Is it nuanced with a lot of technical details? Some of this can be evaluated without having the background knowledge, especially if you look at papers from different groups side-by-side.
The reason I am suggesting this is because there is something important about the scientific process that is often misunderstood. People primarily experiencing science through their classroom education, such as undergraduate students, are used to working with textbooks. Books contain facts that are vetted and verified, and you are not exposed to the possibility of questioning them. Active research is different in that it is fluid – conclusions of any paper can be overturned, or evolve when new findings are made. This happens fairly often. But the full story is more complicated.
In fact, for large swaths of science, reproduction is either never attempted (for example because the work is too specialized) or fails. This is known as the ‘reproducibility crisis’. Awareness of this crisis among practicing scientists is not great, and many of them think it does not concern their discipline or them personally. But in reality, challenges in reproducing results are related to how we do science, not to which science we do. In many cases, it is human nature that pushes a scientist to make an irreproducible claim in their paper. Everyone wants to report a big discovery, and the person easiest to convince is often yourself!
Why is this not caught? Aren’t all published papers reviewed by peers? There are many reasons. Some of the key ones are these. Reviewers often do not have the time to check the entire work; they skim through the paper until they form an overall impression and compare it to their expectations. On top of this, most papers do not provide enough data to really scrutinize the claims – just 3-5 figures out of thousands of measurements! Finally, there are so many journals out there that you can shop your paper around until it gets accepted somewhere, even if the referees at the first journal found mistakes in your work. Remember, most referee reports are confidential, so editors at other journals don’t get to see them. Journal prestige does not make a paper immune to errors; in fact, more famous journals attract more unreliable claims because publication in them is highly coveted. Many retractions happen at top journals.
Back to your review of the groups that have caught your interest and where you might want to work. When you are reading their papers, I recommend installing the Pubpeer plugin in your browser. Pubpeer is a platform used for collecting comments on papers post-publication. Many of the comments are critical. If a paper you are reading is mentioned on Pubpeer, the plugin will let you know. You will be able to read what other experts are saying about the paper. Pubpeer is not universally used by all, but this is one straightforward way to find out whose work is coming under scrutiny by peers. Another resource you can check is Retraction Watch, or the Retraction Watch Database – does the group you are interested in have papers there? Why were they retracted? Remember, a retraction does not automatically mean a research integrity violation. Sometimes honest mistakes are found, and that is okay. At other times, research integrity violations are mischaracterized as honest mistakes, so look for patterns, like more than one retraction or many criticisms of the same group.
If you get to visit your prospective graduate school, you may be able to ask about the climate in the group. People usually ask about working hours, social atmosphere, i.e. friendliness, treatment of women and minorities, expectations from a group member. These are all vitally important questions, and some research integrity issues arise from the work culture. For example, pressure to deliver results can trigger data fabrication.
I suggest that you also ask whether there were any research integrity type situations. If anything significant happened in the past, somebody might tell you, though some incidents may be confidential. You can also ask related questions. Does the group share their data? If the answer is ‘no’, that is potentially a red flag, because this can create a situation where true findings are hidden. Where is the data stored? If there is no system for preserving data, then there is no way of checking the findings, and no sense that the research should be accountable. Many funding agencies require a data management plan, but is it actually followed in the lab? Do researchers share their code? Do they post their papers on a preprint server such as arXiv? Why do they make their choices?
You could also check out the university policies for evidence of transparency. But I cannot recommend evaluating a place based on their institutional stance towards research integrity, open science, reproducibility etc. Experience shows that if you go high enough in any organization, you will likely be disappointed. For instance, my own University just issued a policy towards data that states that “…Research Records shall be available only to those who need such access and to the minimum amount necessary”. Fortunately many faculty are significantly more transparent, releasing and sharing data for all our recent publications and many older ones.
On the flip side, even if an institution has a declared progressive pro-transparency policy, for instance in some European countries, they may not be enforcing it. Institutions tend to cover up misdeeds committed within their walls, because they think it helps them preserve their reputations, keep or win funding, or avoid penalties. I think this is misguided and by acting this way they are only hurting themselves. And of course they are hurting science, and the junior scientists involved.
If you end up in an actual research integrity situation at some point in your career, you may unfortunately find that you are on your own. The people in charge of investigating it at your school will act like they don’t believe you, will take forever to reach conclusions, and you may experience retaliation. In this case I recommend reaching out to somebody outside with similar experience who may be able to help. I hope this changes, but it appears to be this way at the moment. Examples of universities not acting on clear allegations of misconduct are plentiful. In some cases, universities have to pay for their actions; in others, nothing happens to the people involved.
So your best bet is to try to avoid a group where the research culture may create a situation like this. The person who can make the most difference is the principal investigator. And this is why I recommend trying to figure out the attitudes towards transparent, reproducible research in the laboratories you are interested in and letting that inform where you apply. Good luck!
Is the smoking gun evidence sufficient to make a scientific claim?
I took a break from posting, but I had a good excuse: I was finishing a manuscript that I am very excited about. The title is ‘Smoking gun’ signatures of topological milestones in trivial materials by measurement fine-tuning and data postselection, and you can already check it out on arXiv. I also had a chance to give a talk about this paper, so I decided to retell the talk in this post. The good thing is it has a lot of pictures.
The paper goes over four examples where my students and postdocs (Po Zhang, Bomin Zhang, Yifan Jiang and Seth Byard) found very dramatic patterns in their data in four experiments. The images are so distinct that taken in isolation they can be used to convince a fairly informed physicist that a breakthrough was made in the field of exotic superconductivity or topological quantum computing. However, as the title already reveals, all four experiments are cases of finetuning and data selection.
First, let me give my definition of a “smoking gun” when it comes to making a scientific discovery. It is a piece of data, a graph, that contains the full proof of the phenomenon, a single figure that tells the entire story. True smoking gun discoveries have been made. My favorite is an experiment that was done in my previous lab in Urbana-Champaign, where I did my PhD, but five years before I arrived there. This is work by Dave Wollman and Dale Van Harlingen that is known as the “YBCO corner SQUID experiment”. I copy it here from their 1995 Phys. Rev. Lett. paper:
The actual smoking gun in these data is the dip at zero magnetic field in panel (b). The white cube is the YBCO crystal. When a contact is made around the corner of the crystal, the critical current diffraction pattern exhibits destructive interference (the dip), proving that YBCO and other cuprate high-temperature superconductors have d-wave pairing symmetry. The full importance of this work is not yet known, because the mechanism of high-temperature superconductivity in cuprates is not completely established. But the effect is very dramatic: not only is it a smoking gun for d-wave, it is also a “yes/no” test for this property. If the pattern had a peak in the middle, that would indicate not d-wave but s-wave, a more conventional pairing symmetry.
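For intuition, the idealized corner-SQUID patterns can be sketched in a few lines. This is just the textbook expectation for symmetric junctions, not a model of the 1995 data:

```python
import numpy as np

# Idealized corner-SQUID critical current vs. applied magnetic flux.
# s-wave: the two junctions around the corner interfere constructively
# at zero flux, giving a central peak.
# d-wave: the order parameter changes sign around the corner, adding a
# pi phase shift that turns the central peak into a dip -- the smoking gun.
phi = np.linspace(-2, 2, 401)              # flux in units of the flux quantum
ic_s_wave = np.abs(np.cos(np.pi * phi))    # conventional pairing: peak at phi = 0
ic_d_wave = np.abs(np.sin(np.pi * phi))    # d-wave pairing: dip at phi = 0
```

The “yes/no” nature of the test is visible right in the formulas: at zero flux the s-wave pattern sits at its maximum while the d-wave pattern is exactly zero.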
This is a very appealing way of doing science. While no story is ever quite so simple, scoring a “smoking gun” is clearly a win. This is like a hole-in-one, or a touchdown, or knocking one out of the park (insert your favorite sports analogy here). I too wanted to find my own smoking gun. I came close with the initial Majorana paper. I remember many people telling me and the other authors: “you really got it!” It felt good. Here is what I wrote about it at the time in 2012; I was rather upbeat. Of course the doubts lingered, and the initial signatures did not turn out to be Majorana. But the signal we reported was also fairly dramatic and the figure was self-contained: the zero-bias peak that appeared at finite magnetic field in a nanowire-superconductor device, just as the theory predicted:
Really, what else could it be? We did not know of any alternatives then.
I should also discuss how we found this signal. More or less the way the child in the picture below searches for his favorite station on the old radio: we tuned around until it was there. We did not have any reference as to where we should be searching; the idea at the time was that if we see a pattern like this, and it passes a few secondary checks, it would be Majorana. In principle, the “looking around” approach is a valid method of discovery. Suppose you are trying to find signs of alien life. You don’t know which star you should focus on, so you survey the sky. But any signal found this way is, by construction, rather fine-tuned.
The challenge, however, is in how rich the patterns can be when the samples are very small. Physics has moved to smaller and smaller objects, such that we deal with individual electrons, spins or photons. What comes to the forefront are the fluctuations. Small changes in the environment can influence the data a lot. And quantum interference is like waves on water: it can generate amazing images. My current preferred analogy is to the clouds in the sky:
You can be looking up and seeing a very clear rabbit, but give it a moment and it begins to deform and split up until it no longer looks like anything. This is what the data in small (mesoscopic) samples is like. If you take several snapshots at the right moment, under the right conditions, you can make a claim that you have observed a zoo of huge animals floating in the air.
This is what happened with Majorana. A couple of years after our first paper, another group from Grenoble found a similar zero-bias peak signal, but under conditions incompatible with Majorana. They found another explanation for it, in terms of interesting but less exciting Andreev bound states in quantum dots.
It turns out that smoking-gun-like data that are in reality something entirely different are not that rare. In our new paper we went through four of our experiments and gathered smoking gun look-alikes. Our point is to bring attention to how this method of making a scientific claim – through demonstrating a smoking gun – has a major blind spot when it comes to data selection. This is important because a typical scientific paper is just an illustrated essay: it only includes 3-5 figures, so data selection is built into the design of modern peer-reviewed publishing.
But let me give you a quick run through our examples; you can read the paper for more details. For 3 of the 4 examples we also wrote in-depth standalone papers, which I link to below.
Our first example is superconductivity that strengthens with magnetic field. Superconductivity is the zero-voltage state, and you can see in the figure that for higher fields B (orange and black curves) the zero-voltage region expands around the horizontal axis. This is quite uncommon. Magnetic field is the enemy of superconductivity: pretty much all (s-wave, and even d-wave) superconductivity originates from anti-aligned electron spins, while a magnetic field wants to flip all the spins into its own direction, and this destroys superconductivity.
The counterintuitive behavior can lead you to think that spins in this sample, which happens to be a nanowire coated with a standard superconductor, are already co-aligned at zero field, and that therefore the field does not disturb them but only makes them even more aligned. This would be known as “triplet” superconductivity, a highly coveted state of matter. However, the figure I pasted falls apart if we tune the control knobs – gates g1 and g2 – much as if we tuned the radio and lost the station. The increase in supercurrent is accidental and does not signify an exotic state of matter. These data are from the work of Bomin Zhang. He has a separate paper on these devices, but we did not go into the details of the increasing supercurrents there.
My second example is on zero-bias peaks, which, as I wrote already, can mark the presence of Majorana modes. Here we find one such peak, which does a peculiar thing – as we vary the gate sg1 it does not change its height for a while. You can see in the linecut below how the top of the signal is kind of flat: if you ignore the fluctuations, it stays between the two “5%” dashed lines for a while.
The idea that you find in a couple of papers out there, including one of the retracted Delft papers, and another one by the same key author but now from Tsinghua, is that regions of the zero-bias peak that possess this kind of flatness, or ‘plateau’, are suggestive of Majorana. In our work, this plateau-like appearance is fairly easy to obtain through fine-tuning, and it goes away just as easily. Yifan Jiang, my former student who graduated this year, has a separate paper about these devices where we also discuss the pseudo-plateaus.
My third example concerns Shapiro steps. There is a prediction that in topological junctions, which contain Majorana modes, the voltages of Shapiro steps are doubled, which is the same as saying that every other step in the Shapiro staircase is gone. Well, we found exactly such missing odd-order Shapiro steps, marked by red arrows in panel (b). This has got to be topological superconductivity – what else could it be?
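To put the prediction in numbers (a sketch of the textbook expectation with an assumed drive frequency, not our measurement):

```python
# Shapiro steps: a conventional Josephson junction shows voltage plateaus at
# V_n = n * h * f / (2e). The topological (4pi-periodic) prediction doubles
# the spacing to V_n = n * h * f / e, i.e. every odd conventional step is gone.
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C
f = 1e9               # microwave drive frequency, Hz (assumed example value)

conventional = [n * h * f / (2 * e) for n in range(5)]  # steps at every integer n
topological = [n * h * f / e for n in range(3)]         # only even-numbered voltages

# At 1 GHz the conventional step height h*f/2e is about 2.07 microvolts.
```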
But in panel (c) we show steps at even positions missing, and the predicted pattern falls apart. In panel (d) we find extra steps, at half-frequencies, in the same device. The differences between panels are minor – a change in the applied magnetic flux or in the applied frequency. The wind blew, the clouds shifted, and the rabbit no longer looks like anything.
My final example is the work of Seth Byard, my current student, who was measuring periodic conductance oscillations, which you can see as blue and red parallel lines in the figure below. If you cross one of the periods, from bottom left to upper right, you are adding one electron to the island. So the period of the pattern is e, the electron charge. The dramatic signal in these data is the abrupt shifts along the dashed yellow lines. The shifts are by a fraction of the period, and this makes you think that we have detected fractional charges. Now, all elementary particles, except quarks, have integer charges. But in condensed matter physics we know of quasiparticles of fractional charge, in the fractional quantum Hall regime. In fact, shifts just like these were reported by the Purdue group as a ‘Direct observation of anyonic braiding’, which would be a stepping stone to topological quantum computing, especially if the experiment were repeated with Majorana particles.
In our case, however, these shifts are nothing more than jumps of electrons, with their integer charges, in a nearby quantum dot. We did not create that second dot on purpose, it was accidental. The mutual capacitance between the intentional and the unintentional dots is such that an added charge in one results in a 1/3 shift in the other.
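The arithmetic of the mimicry is almost embarrassingly simple. A sketch with made-up capacitance values (not our device parameters), just to show where a 1/3-period shift can come from:

```python
# One electron hopping onto the unintentional dot shifts the charge induced
# on the intentional island by roughly the cross-capacitance ratio.
# The values below are illustrative only, chosen to give a 1/3 shift.
c_mutual = 1.0   # coupling capacitance between the two dots (arbitrary units)
c_total = 3.0    # total capacitance of the unintentional dot (same units)

# Shift of the Coulomb-oscillation pattern, in fractions of one period (= e):
shift = c_mutual / c_total
print(shift)  # 0.333... -- an integer charge masquerading as "e/3"
```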
So, these are my four “smoking gun” examples. If you read our paper, you will find references to other works where smoking-gun-looking signals with fairly mundane explanations were uncovered. Taken in isolation, each of these works, while important, is rather detailed and digs into the particular experiment it tries to emulate. The editors at glossy journals, which published the original claims eagerly, get bored and don’t want to pay attention to these follow-ups, which are essentially negative reproduction studies. Though there are exceptions.
Taken together, our collection of fine-tuned data makes a larger point: in a modern scientific paper, which is nothing but a text with a few pictures, smoking gun evidence should be taken with great caution.
A critic would say (and they have!) that by asking for increased scrutiny we shut down creativity and prevent discoveries that start as mere speculations. But I for one am not against smoking gun papers. I admire true smoking guns. The problem arises when this becomes the expectation for every paper of significance – that it should contain a smoking gun proof. Then the literature becomes saturated, and you no longer know which of the many claims is the real smoking gun.
There is fortunately an easy fix – we can fill the illustrated-essay paper with more substance by sharing more data and code. These materials can be used to check how fine-tuned and representative the data really are. Is what we are looking at in this figure a real correlation, or a coincidence? This should be possible to establish with enough evidence to analyze. My group shares full data from all experiments on Zenodo, a public data sharing platform run by CERN, and we have done so for this new paper. There you can make sure that we truly did do the fine-tuning, and that we haven’t accidentally discovered triplet superconductivity, Majorana modes, their fusion, or fractional charges.
Quantum offers an opportunity to leverage existing strengths and join a global trend
Every day people die in Ukraine from russian bombs. Soldiers, many of whom were in peaceful professions, are sacrificing lives to defend their land and their families. The focus of Ukrainian society is on the Victory for Ukraine, and they are slowly winning. While russia will not prevail, the war is still going to last for some time. Yet, amid the fighting and the uncertainties that it brings, many in Ukraine are already rebuilding. They are right: it is important to preserve and sustain the country now, so that the future growth does not start from scratch.
When I think about where I could help besides donating money to defenders, I arrive at the idea that I should seek collaborations with researchers in Ukraine. I haven’t lived there for 30+ years. But I am from Ukraine. My parents, grandparents and great grandparents were scientists and engineers in Ukraine. I have built my career in the West doing research in quantum physics, so I made it my goal to reach out to the members of the quantum research community in Ukraine.
Many academic institutions around the world responded to the russian invasion by offering positions to Ukrainians who were able and willing to leave the country. #ScienceForUkraine was a hashtag people used on Twitter, and several organizations popped up that aggregated PhD, postdoc and visiting researcher positions available to Ukrainians. This played a huge role: I know several people who would not be in academia today without these opportunities extended to them.
But even more researchers remain in Ukraine, whether by choice or circumstance. For instance, males of draft age are forbidden from crossing the border. A fraction are able to carry on their work despite the war. A lot of Ukrainians are displaced internally: they have moved to another part of Ukraine and are away from their places of work.
It is harder administratively for western academic institutions to support someone who is physically in another country, mostly because there was not much need to do this in the past and therefore the policies do not exist. However, several grant initiatives by professional societies and government agencies, as well as sponsor matching programs such as Universities for Ukraine, have had success sending cash to researchers in Ukraine. These efforts should be expanded because there is an acute need to keep the existing projects afloat, before they are terminated and before people look for other ways to make a living outside of research.
When it comes to quantum science, Ukraine has a long history of excellence. Since the discipline was built over the course of the XXth century, a lot of what is known as Soviet, or even russian, achievement was in fact done in Ukraine. Quantum physicists such as Bogoliubov, Shubnikov, Landau, Yanson and Omelyanchuk worked in Ukraine. Upon gaining independence in 1991, Ukraine did not prioritize science and technology. This is typical for many states that broke away from the USSR. As a result, Ukraine now has an academic system that retains many features that do not match how the outside world is set up. For instance, universities generally provide a high level of education and graduates are competitive when it comes to actual STEM skills. But many of the research labs and centers are structured the same way as before independence, because there was never enough investment to reimagine them drastically.
There is a genuine thirst for reform in all dimensions of Ukrainian society. The country needs to pass many reforms to join the EU, and people want them simply in order to live better. While a lot of the western support now goes towards defense, it is likely that as the russian invasion falls apart, plans will be made to invest in an expansive domestic agenda, including the technological research sector. So reform is likely. Making quantum science and engineering one of the vectors of this effort is the right move for Ukraine for two principal reasons:
Ukraine has a great potential to contribute in this area, in terms of untapped talent, knowledge base and expertise.
Quantum is currently on the minds of policymakers and industry leaders around the world, so why not go where the world seems to be going?
Globally, governments roll out quantum initiatives and flagships to the tune of billions of dollars. This is motivated by both the threats that quantum information technologies pose to national security, as well as by the opportunities in computing, sensing, cryptography, materials, finance, and medicine. Quantum is a fairly inclusive concept that covers many fields. Industry is committing to it. Large companies such as Google, IBM, Amazon and Intel all have quantum projects. Major newspapers and magazines write about it regularly. Smaller startup companies number in the hundreds.
As with most cutting-edge trends, there is absolutely no way of telling how many useful applications this will bring 10 or 20 years from now, but quantum is in fact where things are happening today. There can totally be Ukrainian quantum startups, and branches of large industrial projects in Ukraine. Ukraine is already part of a major EU initiative, Horizon Europe, which does include quantum. The United States should also think about how we can partner with Ukrainian assets towards the goals that our government sets for itself.
But it is researchers that are working in Ukraine now, or planning to come back in the foreseeable future, who will be responsible for the development of science in Ukraine. It is important to listen to them and try to understand what their vision might be. I asked several of them about what they need. Understandably, for the most part they have no time to think in vague broad strokes, be overly optimistic or even just waste energy on anything beyond their pressing needs. Some of the labs are actually damaged or destroyed by russian bombing. Others have no electricity. Even if everything is alright at the basic level, there are unaddressed concerns about salary cuts, about equipment that is broken or missing, supplies and chemicals that need to be procured. Some of their group members may be situated abroad and communication with them is complicated by time zones and inability to walk into the office and find them there.
One superficial issue is the question of branding. Ukrainians, perhaps due to their relatively weak integration into the global research process, do not call what they do ‘quantum science’. They use a variety of earlier names that are all now merging into ‘quantum’ in the West. But the actual research base is there! Check out this US-Ukraine quantum workshop taking place Aug 28 – Sept 1 online, you will see many presentations from Ukrainian universities. Several schools are setting up quantum curricula at the undergraduate and master’s levels. I personally am impressed with how much they are already doing, especially given the modest scale of their funding. I think a lot of it is pure personal motivation and optimism, great things to have and to nurture.
Of course if the investment increases, it will be possible to expand the scope of activities, across research, education as well as knowledge transfer to industry. Will there be a Ukrainian national quantum initiative, or quantum strategy? Will there be quantum centers and institutes? Will Ukraine choose to focus on hardware, networks, software, foundations or all of the above? In any case, it would have to be a major undertaking to get anything new going. For example, the country currently does not have a cleanroom for making quantum chips, it costs $15-30M to set one up. Who will be able to pitch in?
At this moment in time, with many eyes on Ukraine, it is possible to find partners and supporters, who are willing to spend time, energy and resources if they see that it can make a difference. Myself, I don’t know if we will ever build a quantum computer that lives up to its promise. But I think that quantum science and technology, being a legitimate global trend, can serve the needs of Ukraine well, and give it a chance to shine.
Let’s look into some basic retraction statistics in physics…
It’s been 2.5 years since Nature retracted “Quantized Majorana Conductance” by a Delft-Maryland team. Since then, my colleague Vincent Mourik and I have flagged three more papers from some of the same authors for retraction. Nature retracted one of these three 1.5 years ago, but the key authors of that second paper fought bitterly against retraction, though the editors ultimately presented it as the authors’ decision. And the authors continue to fight against two other retractions we think should happen, at Nature Nanotechnology and Nature Communications. You can read our investigations here (or here) and here.
A Delft paper in Nature Communications remains unretracted, without even an expression of concern, despite extensive data issues self-admitted by the key authors. The journal denied other authors’ requests to be removed from the paper. https://twitter.com/spinespresso/status/1468870460100251650
Their close collaborators, old friends and various other insiders are also of the opinion that it is time to stop with retractions and investigations. We hear: “A lesson has been learned.” “Everyone got the point.” “Anyways, we know which papers are good and which ones not to trust.” “This is getting personal.” “Too much negative attention is bad for the field.” Each of these deserves a separate substack.
So indeed, there have been quite a number of retracted and questioned papers as of late in condensed matter physics. Perhaps it is already too many? Or, on the contrary, should there be more?
It is difficult to answer this question precisely, because of all the secrecy surrounding the publication process. We cannot know how many people have approached editors or university officials with confidential concerns that are squashed, and how many did not even try to do anything out of fear, or because of discouragement.
At the same time, a large fraction of us, if not everyone, who has been around for a bit, have read a paper that was obviously wrong, and not in a good way – meaning not through reasonable scientific disagreement. I think many people would agree that there are papers out there that they think should be retracted or corrected or at least commented on. I certainly hear stories like this all the time, people come to me with them and we share experiences.
While I cannot provide you with rigorous statistics, I can do the second best thing and use a trick that physicists are very proud of – an order of magnitude estimate. Let’s take Physical Review Letters as a first example, a well-recognized journal that physicists think of as a good one. The first input should be the total number of papers they publish, which went from 5000/year to 2000/year. Over the past 20 years this makes 80,000 papers (order of magnitude!). 20 years is not an accidental choice. First, this is how long ago the last batch of physics retractions, related to Bell Labs, took place. And second, this is roughly when PRL began concerning itself with impact (more on that later and in future substacks).
By what process do articles end up published in Physical Review Letters? Well, it is a fairly straightforward peer review where the editor mostly tallies up the referee votes. Reviews are often short and not nuanced, and the overwhelming majority of reviewers do not see and do not ask for additional data or materials. So there is bound to be some rate of errors in the process, where unreliable, incorrect, manipulated or fabricated claims make it into the journal. What rate should we assign? To me 1% seems reasonable, and likely an underestimate, based on studies of scientific reliability in other fields. That would make 800 retractable papers over 20 years.
Maybe you would like to make an argument for physics exceptionalism – that physicists have a superhuman ability to detect BS. The chief editor of PRL is certainly of the opinion that his journal is just really good at selecting papers and checking for fraud and other issues prior to publication. Physicists can of course also be substantially more moral than other scientists or ordinary people. Perhaps they don’t commit fraud and don’t experience pressure to publish in high impact journals, for which they would bend or alter their claims. Then you would choose an error rate of 0.1%, which is close to the average rate of retractions across all of the literature, not specifically physics. That is still around 100 papers over 20 years from PRL.
Let me guess: you think that you would certainly have heard of 1000 retractions, but 100 retractions is not that many over decades, just a few per year. Physics is big, and even if nothing happened in your field, there must have been something on other subjects. Right?..
Let’s now turn to the Retraction Watch Database, a rather complete catalog of all recent retractions. It lists just 16 retractions from Physical Review Letters since 2003 (it reaches 16 with the newest one). So every year PRL publishes several thousand papers but retracts, on average, less than one! Perhaps retraction is just too harsh a measure, and instead the journal opts for corrections or comments? Those number in the low tens, and there is evidence that PRL has been making it harder to submit and get comments published.
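The whole order-of-magnitude bookkeeping above fits in a few lines. All inputs are the rough assumptions stated in the text:

```python
# Order-of-magnitude estimate of "retractable" PRL papers over 20 years,
# compared with the actual count. Nothing here is precise, by design.
papers_per_year_then = 5000   # rough PRL output ~20 years ago
papers_per_year_now = 2000    # rough PRL output today
years = 20

# Crude average of the declining output; the text rounds to ~80,000.
total_papers = (papers_per_year_then + papers_per_year_now) // 2 * years

actual_retractions = 16  # Retraction Watch Database count for PRL since 2003
for error_rate in (0.01, 0.001):  # 1% (plausible) vs 0.1% (exceptionalist)
    expected = total_papers * error_rate
    print(f"error rate {error_rate:.1%}: ~{expected:.0f} expected, {actual_retractions} retracted")
```

Either way you set the error rate, the expected count dwarfs the actual one.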
In my opinion, PRL’s retraction statistics are extreme. They are not evidence of how well the system works but of the opposite: of how the journal suppresses quality control mechanisms by refusing to act on critical concerns, and through this undermines the scientific process. The retraction rate is between one and two orders of magnitude too low, and this sends a message that anything goes: as long as you somehow make it through peer review you are all set and will face no consequences for any exaggerations, manipulations, fabrications or falsifications. And if you know of any, there is nothing you can do. Indeed, retractions like the one that just happened involve very high-profile works, and they took enormous efforts to put through, made reluctantly under extreme community pressure and after concerns were initially dismissed. Any smaller problem is viewed as not a big deal.
Perhaps the wrong papers are all among the rejected papers, and peer review works so well that there is no need for retractions? Except that nothing prevents submitting the same manuscript, without any changes, to any or all of about a dozen high impact journals in physics until it finally gets in somewhere, at which point it is safe forever. On top of this, anyone who has actually been part of peer review, on either side of the process, knows that the degree of checking is often nominal, inconsistent at best. Without mechanisms of post-publication review, the process is missing a vital quality check.
The culture of retraction is either token or non-existent. For example, Nature Physics is the journal that unseated Physical Review Letters in the impact publishing sector. It is a boutique journal, meaning it does not publish very many papers per year. But in its 18 years of existence, it has not retracted a single paper! Nature Nanotechnology has made a total of one retraction in its history. Nature Communications has made approximately 20, but similarly to PRL it publishes thousands! You can check my numbers using the Retraction Watch Database, or try to scroll the journal websites and find retractions there.
But what is a product without quality control? A journal without a single retraction is a product without any demonstrated mechanism of checking for and correcting errors. And this means that any and every single paper in that journal can be wrong. So why do we need to consume a product like this? Imagine it were a car – would you say: we don’t know which parts work and which don’t, but we are still okay to drive?
The unreasonably low retraction rate itself makes it harder to retract unreliable papers that have been clearly identified. Think about it: what does it mean for an editor to never have retracted a paper in their whole career? You can imagine that there is a very high psychological bar to imposing such a drastic sanction – what could justify this ultimate step, akin to criminal sentencing? Yet the editors at these journals are not only reluctant to cooperate with investigations or to help fellow researchers obtain data, they also come up with ridiculous arguments for why reproduction works should not be published – for instance, because they are lower in impact than the original, unreliable, claims.
And perhaps this is why, after a couple of retractions, they hold a meeting and decide to push back on further retractions. After all, the point has been made. Lessons learned. Time to move on. Otherwise there will be damage to the field. Or, more importantly, to the journals. Yet, by resisting retractions, the high impact journals only hasten their demise. They exhibit an inability to adapt and embrace the evolving needs of researchers: of a first-year graduate student who wants to know which papers they can trust; of an assistant professor whose field is poisoned by unrealistic claims from bigger groups; of a postdoc trying to build on a study that they read; or of a member of the general public who read a sensationalized piece in a newspaper, picked up from a glossy science journal.
Rapid reproduction efforts and open debate are a pleasure to follow
My next post can hardly be on any topic other than #LK99, the claimed room temperature superconductor. It uses the naming scheme of a virus, with the year of discovery in its name, and it does appear to have started a pandemic of sorts. I will reflect on what has happened so far, but my post is not about whether or not the claim of superconductivity is reliable. (A great source to check the current status of the effort is the LK99 Wikipedia page https://en.wikipedia.org/wiki/LK-99 ).
Instead, I invite you to marvel at the amazing scientific process happening in front of our eyes, the open debate, the instant reporting on both progress and failure. The thrill of it all and how we should remember this feeling while it lasts. What I would like to ask is that you close your eyes, and imagine that we do this as a matter of routine, within our community and without the bitcoin bros pumping us up. I think it would be amazing.
(Rene Magritte – The Castle of the Pyrenees, 1959)
My first reaction was skeptical, same as for many physicists (though not all!). I never veered far from this skepticism, though at times new revelations left me wondering if this could be it after all. Don’t get me wrong, I want there to be room temperature superconductors. As a physicist who took 5 different courses on superconductivity, I certainly recognize this as one big dream we have. (We even justify building quantum computers because they may one day help us discover a room temperature superconductor!)
I am a verified longtime fan of superconductivity – here is me in graduate school almost hugging the plaque.
As an aside, this kind of ‘big if true’ admission by a skeptic is a self-trap. You give up polemic ground simply by admitting to it. First, if it is potentially big, then it is certainly worthy of attention (an argument for getting it accepted into an impact factory like Nature). Second, if it is potentially big, does it matter that it is not rigorous? Surely the inconsistencies and unfilled blanks can all be figured out in due course! Now is not the time for all that, now is the time to deliver the big claim to the masses. This is why, if you still referee papers for journals, I recommend never putting any admission of impact in your reports, especially if your real point is that the claim is not true.
The reason I dismissed the arXiv article when it first appeared was simply that I had already been exposed to many poor-quality reports claiming room temperature superconductors in the prior years. To my eyes this looked like another UFO sighting.
Other people also took issue with the data quality in the original preprint from the Q-Centre in Korea, which I agree looks sloppy. Then again, there is no law against having a functioning UFO in your barn for 20 years, wanting very much to share the news, but only taking very blurry and zoomed-out pictures of it as proof. I personally would not have taken that route and would have tried to get clearer data first. But as long as there is enough transparency, accurate description of findings and unimpeded replication attempts, I am fine with any data.
The UFO analogy is very current as well. We just witnessed a big scandal over ‘ambient’ superconductivity from Rochester. The problem there was in a sense the opposite: a fairly polished and superficially comprehensive data set that convinced the key people, such as referees and editors. But the work was done in secrecy, and it was very difficult to get data out of the authors. According to Wikipedia, Nature rejected the LK99 paper, while in the same period of time it published Ranga Dias’s claims from Rochester. We now know that Ranga Dias’s work is unreliable. It will be hilarious if it turns out that LK99 is real and Nature did not recognize it, instead pushing hard for Rochester. Along with the New York Times, which has since joined the LK99 train.
In fact, the ‘superconductor of the summer’ provided an amazing comeback opportunity for Kenneth Chang of the New York Times, who just a week prior was unable to let go of Ranga Dias and his made-up reports. I wrote a separate substack about that. Sweet work by the NYT titling department on this one too. If it is the ‘superconductor of the summer’ then I understand why it is low pressure, but at the same time why is it not chill? (Sorry for the quality of jokes, this is Substack, not Twitter, where all the top jokes still live, see below.)
Open Debate
While the immediate attention is obviously on what is going to happen with the multiple verification efforts, and on whether or not the dream of room temperature superconductivity comes true, I want to promote something absolutely beautiful that is happening in our solid state physics community, and to explain how this in itself could lead to many more wonderful and ultimately real discoveries if we keep going this way.
All of a sudden, numerous excellent experts on various aspects of materials science, chemistry and physics are talking publicly and openly, without the usual awkwardness and inhibition that defines scientific communication. The scientists of your imagination are probably constantly at each other’s throats arguing about the TRUTH, giving each other advice and engaging deeply with each other’s work. Nothing of the sort! Scientists are barely communicating. They get together often, they appear to listen, you get a comment or two on your hour-long presentation, and it is usually something neutral or artificially positive.
If anyone has anything critical to say, they would rather keep it to themselves. The reason for all this is the secretive, business-like model of publishing and funding that favors working in your corner for small personal gain. This is why, for instance, our criticism of various works on Majorana was met with shock and disapproval by the people closely involved – “how dare they talk openly about the problems and disturb the calm and security of doing business”.
So why, in the case of LK99, did multiple people, trained in artificial yet powerful restraint, abandon caution and start speaking out? What made this possible is a rare coincidence of factors:
Apart from the superconducting temperature itself, this is all physics with a long history echoing back decades, so there’s a large pool of people with something to say.
It looks like the synthesis is simple, and the measurements do not require low temperatures or high pressures. Many replications appeared instantly, giving more fuel to the debate with more data to analyze.
Huge attention, most dramatically on Twitter, which in itself makes it worthwhile, even if you are skeptical, to put in your 5 cents while there is an audience there for it, even if it is bitcoin bros.
Check out this dude who shows the full spectrum of emotions on LK99.
Today might have seen the biggest physics discovery of my lifetime. I don't think people fully grasp the implications of an ambient temperature / pressure superconductor. Here's how it could totally change our lives.
By the way, shout out from a coffee fan here, because he is working at a startup making frozen coffee. Hear me out bruh – room temperature coffee!! No cryogens needed anymore.
A mystery of social dynamics, to me, was why multiple groups and individuals decided to engage in reproducing this particular claim on LK99. My conclusion is that these are mostly not career superconductivity physicists, but chemists and other specialists. They were not exposed to the onslaught of irreproducible claims in this field, and did not have time to grow bitter. They took the Q-Centre results more or less at face value and genuinely wanted to obtain a room temperature superconductor for themselves. There are historical precedents for this being an effective breakthrough method. For instance, Leon Cooper, who explained superconductivity, came to solid state physics from particle physics.
Here are some of the interactions between the scientists that all of this precipitated, and which I hope could become part of our routine process.
I want to highlight how expert physicists joined the discussion of these efforts online, with real-time suggestions, opinions on the results, even lab safety advice! The live experiments were not done in vacuum, people were paying attention to them, and engaging with this! The original data were also scrutinized including some initial data forensics, looking for unusual aspects in graphs. The usual physics clowns also came out of their sheds, but a lot of the discussion was very interesting and educational.
What I like is that people are not afraid to criticize each other publicly, so you can have a full spectrum discussion, not skewed by trying to avoid anything remotely confrontational. Yet it chugs along without major conflicts so far. Part of the reason is perhaps that the authors of the original experiments and of many of the replications are on different online platforms from the commenters, being from different countries. I am sure they are following the discussion but not reacting in real time to it.
Among the concerns, the most common is about partial levitation, which can be explained by diamagnetism or, very plausibly, ferromagnetism, rather than by superconductivity. Another one dear to my heart is about measurements of resistance. People compare the resistivity of LK99, which should be zero if it is a full superconductor, to that of copper, a low-resistance metal that is not a superconductor. Copper wins.
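To put rough numbers on the levitation concern (my own back-of-the-envelope aside, not taken from the original reports): a full superconductor in the Meissner state is a perfect diamagnet, while ordinary diamagnetism is orders of magnitude weaker. In terms of the SI volume susceptibility,

```latex
\chi = \frac{dM}{dH}, \qquad
\chi_{\mathrm{SC}} = -1 \;\;\text{(ideal Meissner state)}, \qquad
\chi_{\mathrm{dia}} \sim -10^{-5}\ \text{to}\ -10^{-4} \;\;\text{(typical diamagnets)}
```

A small flake can still partially levitate, or tilt over a strong magnet, with a susceptibility nowhere near the superconducting value (pyrolytic graphite does exactly this), which is why levitation videos alone settle nothing.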
A bitcoin bro trying to understand what this all means, extracting value for the two datapoints where LK99 seems to be beating copper:
A huge splash was made when Sinead Griffin posted her calculations on arXiv. The mic drop meme and the national lab credentials plus results that seem to not contradict superconductivity in some form were met with absolute ecstasy by the internet.
While Griffin posted her finished paper on arXiv, when it comes to phonons, vibrations of the lattice that reveal a lot about the viability of a material, we could witness the making of a result in real time. First Florian Knoop posted his initial calculations which hinted that the structure of LK99 is unstable. He then posted more careful results where by his own characterization the material did not look too bad! (meaning only that the reported crystal structure could potentially be viable).
Last but not least, you got treated to several excellent retrospective threads on superconductivity in general, on phenomena that surround it and on searching for higher and higher temperature superconductors.
I am seeing a lot of newcomers lately to the room-temperature superconductor rodeo.
They might not be aware of the long history of these events, and I think there’s some cross-cultural communications difficulties going on because of that.
1/
— Prof. Michael S Fuhrer (@MichaelSFuhrer) August 2, 2023
Normally, I post on Mastodon these days, but I've noticed some recent trends here regarding the recent room-temperature superconductor (RTSC) proposals that seem concerning, so I thought I'd offer my two cents. (1/19)
1/8 A thread on magnetic susceptibility data in the papers of the moment. Magnetic susceptibility tells us about how much magnetization M (mag moment per volume) a material develops when placed in a magnetic field H. chi = dM/dH. Annoyingly, cgs and SI units are different.
Superconductivity captures the imagination of the public and scientists alike. The direct technological impact of superconductors has been middling. The impact on basic science, which then informs a wide array of technologies, has been tremendous and arguably singular. 🧵
Superconductor or not, LK99 came out of left field. But one can say the same about the last three discoveries of high temperature superconductivity. Prior to the 1980s, superconductivity was mostly found in metals and intermetallic compounds, with Tc maxing out around 20K.
Regardless of whether the claim of LK99 turns out to be correct, there is enormous value in this experience of working openly and together on something, even if it is just following somebody else’s progress on Twitter. In the future, when not so many people are paying attention, there will still be huge benefits to talking openly, and even to sharing preliminary, half-baked results, as long as it is clear what the circumstances are and what approximations were made. This way we get to the truth quicker, answer questions more fully, and avoid many mistakes; eventually, this is how one of us will give somebody else the billion-dollar idea that they would never have thought of. What we may need to give up is a part of our ego, for the joy of working together.
What prevents us from doing this again? No, there will not be a sensational claim made every week, but we can apply this way of working, of doing science, to any problem, big and small, specialized and of broad interest. It has been my dream for a long time to livecast our experiments, nothing prevents me from doing this except the usual – priorities, laziness, attention span… But I think this could be the ultimate way of working, fully in the open, not through secretive and outdated journals, and not even through arXiv but directly peer-to-peer in a public forum.
At least one person inexplicably disagrees that this was overall great and laments the closed-door peer review (fortunately there are also people on my side). C’mon, if you need to think something over, you can go to your office and write up a note, put it in a manuscript or a book chapter. That is also fine, but this cannot be the only accepted way of doing science – the way it is overwhelmingly done now. Given all the technology that allows rapid exchange of massive data, we are artificially limiting ourselves to twentieth-century habits.
SUPER MEMES
Finally, it is an absolute treat and honor that the internet meme machine, still resident on Twitter, has graced superconductivity with its gaze. People came up with really excellent ones. I mean it.
What a great moment I chose to leave Twitter! Right when you can gain 10k followers for a thread of hot takes on superconductivity… But I am not going back there! I want to explore the longer and quieter form for a while. And to prove that you can also joke on Substack here is one from me:
If LK99 proves real, I hope the government finally realizes that each lab needs to have a dilution refrigerator! 🤡🤡🤡
Poor LaH10. Independently confirmed to hit 275-280 Kelvin after jacking up the DAC pressure to an eye-watering ~200GPa. But no one cares because 40 ºF is a little too chilly for them. Get a sweater!
The New York Times reporting on Ranga Dias is emblematic of failed science journalism
Folks, many of you are aware of a major research integrity situation that is developing at the University of Rochester around the claims of room temperature superconductivity by Ranga Dias. Ranga Dias absolutely observed no such thing. His papers contain unexplainable data artifacts, and his responses to data requests and questions about his papers are non-transparent. One of his papers has already been retracted from Nature, and another one is going to be retracted from Physical Review Letters. It is unlikely that this is the end of it; very likely more papers will be found unreliable, as it often goes.
With Ranga Dias’s claims, Ken has been doing a very poor job. After writing about “room temperature superconductivity” enthusiastically, even visiting the lab of Dias to see with his own eyes the amazing work they do, now, after a second retraction has been announced, Ken came out with a new article with a very unfortunate title…
“A Looming Retraction Casts a Shadow Over a Field of Physics”
Now, it is widely known that journalists do not control the titles of their pieces. But the title is symptomatic of Ken’s approach to this reporting, and of broader issues in science journalism and of how it interacts with practicing scientists.
First, what the title directly means is that a field of physics, presumably the study of superconductivity or perhaps the entire wide area known as condensed matter or solid state physics, has suffered a significant reputational hit because of this newly announced retraction – a second one, and not of Dias’s most important paper. (It indirectly implies that the first retraction, arguably a lot more impactful, did no such harm to physics.)
I am on record as a critic of the research methods used in this field, and I do believe that this topic, as well as the entire scientific enterprise, has major problems related specifically to the reliability and verifiability of major claims. But I have to say that in the case of Dias’s work, the community has done a great job of being skeptical, or openly critical, and proactively so, for a long time. Physicists, by and large, have not been fooled by this. Take, for instance, this blog, where hundreds of comments from many physicists take apart the second Nature paper of Dias, the one the journal published shortly after the first retraction. And the one that Ken Chang of the New York Times promoted from the pages of the paper when he went to the lab in Rochester and saw “something”…
Of course, a journal like Nature can always find a couple of referees that will pass any paper into print, even Dias himself testified that Nature had no concerns about his data. This is a problem of poor scientific culture at Nature, and this is not unique to Nature but is emblematic of many scientific magazines. But the community saw through Dias’s claims early on, and he has also been criticized at numerous conferences and workshops that took place away from the eyes of the press and the internet.
So what does Ken Chang do? After writing not one, not two, but three (or four?) times about this one group’s dubious research, he concludes that now a shadow has been cast on an entire branch of physics. Whereas, in reality, Ken just can’t let go of this story, and when finally backed into a corner and forced to face reality by a second retraction, he pivots to blaming the physics itself. In fact, physics has done a great job on this particular case of misconduct. Physical Review Letters was forced by external pressure from physicists, including at the Virtual Science Forum on reproducibility in condensed matter physics, to start an investigation, and found no other way but to retract this paper – in divergence from their usual practice of resisting any efforts to reproduce their publications.
The unfortunate reality is, any physicist, myself included, would be thrilled to end up mentioned in the New York Times. This could be career-boosting, lead to grants, promotions, invitations to speak at nice places, interest from prospective students. In this sense, people like Ken are kingmakers. Some scientists believe that getting into the New York Times is about achieving something remarkable, like a big real discovery or insight, accumulating so much valuable expertise that you are asked to comment on global events or trends.
In reality, ending up in the New York Times is about a random journalist who happens to work there coming across your item and, without having much expertise in your field, deciding to feature you in their work. As the example of Ken Chang’s persistent reporting on Ranga Dias’s false discovery demonstrates, there is no meaning or value to it. It is of course similar to getting into Nature: all you need to do is present the right combination of material to the editor, and depending on what the editor feels like on that day, you could be in.
But does it have to be this way? Surely scientific journalism can play, and in some cases does play, a major role in explaining, promoting and improving science. Examples of reporting on the Dias story alone prove this (the New York Times not being one of them, though even Nature did a decent job – albeit they did not call it a ‘disturbing picture’ when two Delft Majorana papers were retracted from their own journal). As scientists, one thing we can do is to keep journalists in check where they go wrong. We should do the same with scientific editors; they are just people, not demi-gods who make or break careers.
The New York Times should admit their role in promoting charlatans from Rochester, and apologize to their readership. Kenneth Chang should rethink his approach to how he selects and researches his stories. After all, he sits on top of the science journalism pyramid and other science journalists likely take cues from him.
At long last, the ‘Chiral Majorana’ paper is retracted from Science. It took 4 years. I was not on the team led by Laurens Molenkamp that identified the multitude of problems with it. But I have seen the evidence uncovered by the original investigators, and I repeated and confirmed many of the steps. I will not share at this time what the evidence is, hoping that the investigators will do so themselves. It is important for everyone to know what they found. Because it is brilliant work. And because we can all learn from them how to look at data with our eyes open, and how not to give into the lull of fairy tales.
‘Chiral Particle Fallen Angel’ courtesy of Dall-E Mini
I can attest that the evidence is overwhelming and goes to the basic level of whether the data ever existed. It didn’t. The entire paper is made up. The most positive thing I can say is that some of the figures were inspired by real data. But this is in itself a horrible thing – by imitating the overall signal shape from previous works, they made it more believable to some.
The data in the paper look remarkably similar to the theory predictions of the late Shoucheng Zhang. In fact, Zhang himself joined the paper as a co-author, which indicates he was impressed. He took on the task of promoting this work, referring to it as ‘The Angel Particle’, in analogy to the Higgs boson, known as ‘The God Particle’. Because we know that the similarity of the data to his theory was a result of data fabrication, we can say for sure that his predictions have not, in fact, been realized.
Many others were not impressed from the start. When the paper first appeared on arXiv, my own reaction, after looking at it for ten minutes, was ‘No way.’ Others, such as Jay Sau, Xiao-Gang Wen, and their collaborators, put forward simple Kirchhoff-level arguments for why the claim cannot be right. This is one common folly of people who try to fabricate or falsify results – if they imitate physics that is itself wrong, they are likely to be scrutinized and found out. This was the same for the Delft ‘Quantized Majorana Conductance’ paper, whose retraction my friend and collaborator Vincent Mourik and I precipitated. Despite that paper being accompanied by its own theory, from the University of Maryland, which we also could not verify, the physics behind it was simply wrong. Majorana conductance is not expected to be quantized in nanowire devices, certainly not in the short and imperfect ones that the Delft group had (see this paper, Figure 5 and discussion).
Never mind this pushback – Kang Wang, the last author of the freshly retracted paper, received praise and accolades for his work. For example, he won the Neel Medal literally for the discovery of Chiral Majorana, a discovery that was entirely fabricated. Interestingly, despite Science knowing all the facts of the case, its publisher AAAS kept Kang Wang’s appointment as an editor at Science Advances, a sister journal to Science. I was shocked when I received a referee request signed by him. How can AAAS allow a person who published a fabricated paper in Science to make decisions on what to publish, for the same organization?
Setting aside Kang Wang, it is disappointing that so many other co-authors also refused to retract this obviously fabricated paper. The three co-authors that agreed with the retraction demonstrated integrity and bravery. Of the authors who refused to sign, I am particularly perplexed by Jing Xia of UC Irvine. His involvement in the paper was actually fairly peripheral. His group offered Kang Wang the use of a dilution fridge and helped set up the measurement. As I understand it, the UCLA authors quickly submitted results for publication, and at the time other authors trusted them implicitly.
But given that the raw data supporting the claim of chiral Majorana were supposedly taken in Xia’s lab, it should have taken him only a few hours of looking into the matter to figure out that the claims he had signed off on were fabricated. Four years later, he still does not have enough courage or conviction to remove his name. Extremely disappointing.
All that said, the behavior of individual authors pales by comparison with the absolutely shameful handling of the matter by UCLA and UC Irvine. When first approached, and shown damning evidence of data fabrication, the deans at both schools, both physicists, quickly proclaimed that they had investigated, and found no problems. Editors at Science then decided that their hands were tied. They could not fathom taking the initiative to perform an editorial retraction, i.e., one made without the approval of all authors or the institution. Four years later, an editorial retraction is now exactly what they have done.
The editorial decision, and the lack of mention of any institutional investigation in the retraction notice, suggest that UCLA has not made progress with its investigation. It is appalling that the University of California, the largest state university, has utterly failed to assure the accuracy and integrity of their research record. They should apologize to the public, and compensate the government for funds and efforts that were wasted because of their failure.
There is some degree of justice in Science’s decision for the researchers who spent time and money trying to repeat Kang Wang’s claims. Perhaps they believed in them and wanted to have their own Chiral Majoranas. Or they may have felt a duty to clear up this topic.
On reproduction, Nature is not taking as fair an approach as Science. The editors only permit reproduction studies in lesser journals, a step or two down their vast pyramidal scheme. For example, Quantized Majorana Conductance was published in, and then retracted from, Nature – but our reproduction study of it was only offered a spot in Nature Physics. The first missing Shapiro step paper was published in Nature Physics by a group from Purdue, but a negative reproduction study from NYU was published in Nature Communications. There are other examples. This is like a newspaper publishing corrections to a front-page article on the back page. On the other hand, Nature does better than Science on its retraction process, having already retracted two Majorana papers (1 and 2) and one on room-temperature superconductivity. All of them still took too long, with most of the time taken by the editors just sitting on it.
Back to Kang Wang. He has written a blog post in Chinese in which he basically argues that his samples are ‘better’ than, or at least ‘different’ from, those at PSU. This is one of the commonly used arguments that appeal to people accused of unreliable research. They either argue that they or their experiments are ‘better’, known as the ‘virtuoso defense’, or they claim that their experiments were done differently – usually amplifying minor differences that play no role in the conclusions. It is often easy to see through this.
Another step that the accused take is they often claim that they have since followed-up and confirmed their own findings that were found unreliable. Kang Wang did this – he put a paper on arXiv containing ‘even better’ conductance quantization than what was fabricated for Science on Chiral Majorana. As far as I know that paper has not been published.
The Delft first author Hao Zhang had more success with this method. He submitted to Physical Review Letters a paper with ‘even better’ quantization of his ‘Majorana’ ‘plateau’. The editors at PRL received critical feedback, but chose to publish the paper, knowing full well that the physics behind it is not valid and that the same author had had to retract the same claim from Nature.
Nature played a role in making this possible. It published a misleading retraction notice on ‘Quantized Majorana Conductance’, in which it allowed the authors to say that the problems were limited to ‘signal miscalibration’, omitting any acknowledgement of inappropriate data selection. (Another problem with retractions initiated by authors is that the authors can choose the wording to their advantage.) Of course, if the only problem were calibration, an inadvertent error, then another paper with ‘calibration done properly’ could be valid. But the PRL editors actually knew that the real problem was non-representative data selection. The value of conductance was not quantized; the authors simply selected from values both greater and smaller than the ‘special’ value.
After Physical Review Letters agreed to publish the ‘quantized within 5%’ paper, I showed the PRL editors data from my group containing the same features as in Hao Zhang’s PRL, but where the quantization is half of what naive theory says it should be. These are half-quantized plateaus like those in Kang Wang’s paper, but in a Majorana nanowire. I find it plausible that I could spin a story that this is a physics breakthrough and persuade the editors at PRL to send it for review. Perhaps this is the real chiral Majorana? In our experiments we can find any value of conductance and fit it to any theory – this fact is well-established by now.
The UCLA Science paper is not the only paper by the same key authors that should have been investigated. For example, editors at Physical Review Letters have also seen evidence that another paper by Qing Lin He and others from Kang Wang’s group contains manipulated data. They have shrugged it off. I am a lifetime member of APS and I am extremely dismayed by this. I did renew my AAAS membership yesterday, because with this editorial retraction, it is clear that at Science, published by AAAS, some editors are trying to do the right thing, even if they act with delays that cannot be justified.
The Editor-in-Chief of Science recently published an editorial in which he proposed changes to the process for handling challenged results. He wants to separate the question of scientific validity from the question of research integrity. Oftentimes, the journal simply wants to know if the paper is accurate, but the university investigates whether misconduct has occurred, which takes a long time and holds back retractions of unreliable results. I agree with the editor’s proposal, because it would allow journals to move ahead quickly with retractions as soon as they establish that results are invalid. Any evidence of misconduct should still be pursued, and retraction should not be the only goal.
Editors at journals can and should still do a lot more: demand full data before acceptance; retract papers whose authors refuse to provide data; and send papers for independent post-publication review when concerns are raised, rather than depending on universities, which have an obvious conflict of interest. The editors have a system for reviewing and publishing. They also need a system for investigating.
Some of these measures could help catch unreliable work before it is officially published, while others will aid in correcting the record more efficiently. In the meantime, we should brace ourselves for more revelations and retractions, on this topic and many others, across all disciplines and fields of science.
One of the hardest and most effort-consuming parts of doing science is writing papers. Why is that? In part, because it genuinely does take effort to think through the arguments and lay out your work in a clear and logical way.
But in large part, it is because of how the system is set up and the expectations of format it has generated for a scientific paper. The problem is that a paper has become a long essay that simultaneously caters to audiences with vastly different expertise and interests: in your field, out of field, general audience, novices and experts, professors and students, and so on.
I came up with a way to streamline this process and the idea is simple and intuitive – write in very short blocks of text, without worrying about the rest. Also, don’t think of it as a linear text, an essay. It is really just a collection of blocks that you can toss around, skip – or add, as you go. Rest assured, most readers do the same – they skip large parts of your paper and look for specific information only!
I have been working on this for a while, and got it to the stage of a working paper, which I think is useful for students and anyone who is writing research papers. The working paper stage is also where I can solicit feedback, expand the initial library of blocks etc.
I may eventually publish it in a more traditional sense, but really – I hope it is already useful! In my group we have been using it for a year. And while writing (and finishing) papers is still hard, I think this method brings order to the chaos of the process and helps move things along. After all, you only need to write 1-2 blocks per day to finish a whole draft in 1-2 weeks!
We are looking for several postdocs to work in a collaborative cluster at the University of Pittsburgh and Carnegie Mellon University in Pittsburgh. Positions are available in my group, and in the groups of Michael Hatridge (Pitt) and Benjamin Hunt (CMU). The range of projects is impressive – from spin liquids to scalable qubit designs, and from nanowires to van der Waals heterostructures.
See this page for more information and for how to apply:
So you are, in the recent past, the glorious leader of a mighty experimental lab, like myself. And then you find yourself without a lab, without students to boss around – because your lab is shut down by a virus. What do you do? You are still the same person, hungry for power and new discoveries delivered to you daily on a platter. A combo platter… From that place on campus that is closed… Ah, damn!
Anyways, now my only lab is a bunch of Legos and my only student is a five-year-old. What can you do with Legos? Oh, tons of things. For starters, you can cool them down in your dilution refrigerator! Except… Oh well.
Second best? You can build a Kapitza pendulum. What is it, you may ask? It is the kind of pendulum that you shake and it stands up, seemingly defying gravity. According to Wikipedia, it is magic. It takes a five-year-old about 15 minutes to build, though they need a little help getting the gears inside. And it gives about 3 minutes of uninterrupted joy. All in all, about 20 minutes passed – not bad! (You can enjoy this at any age and possibly for much longer.)
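For the curious, the ‘magic’ has a textbook explanation (my own aside, not part of the Lego instructions). If the pivot is shaken vertically as $y(t) = a\cos\omega t$ and the shaking is fast, averaging over the drive gives an effective potential for the angle $\theta$ measured from the upright vertical:

```latex
U_{\mathrm{eff}}(\theta) \;=\; m g \ell \cos\theta \;+\; \frac{m\,a^{2}\omega^{2}}{4}\,\sin^{2}\theta
```

Near $\theta = 0$ this is $U_{\mathrm{eff}} \approx \mathrm{const} + \tfrac{m}{2}\bigl(\tfrac{a^{2}\omega^{2}}{2} - g\ell\bigr)\theta^{2}$, so the upside-down position becomes a stable minimum once $a^{2}\omega^{2} > 2 g \ell$: shake fast enough relative to the pendulum length, and gravity loses.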
All the credit goes to Matt A. Robertson, who designed the device and created these amazing Lego instructions while working at Texas A&M with Artem Abanov. Making this project brought back memories of the old days, when you could just hop on a plane and visit awesome people at another campus, talk physics with them for a whole day… And they would show you their Legos.
Technical notes in case you build this. 1) There is a link that you can use to purchase all the blocks you need, but it does not include one – item 2730. 2) The part that sticks out to the side is a handle to hold the device. The assembly is designed for a left-handed person, but it can be mirrored.
It is a busy time. Maybe you are exhausted from zoom-partying all night. Or maybe you have kids. Maybe you are a theorist and you did not notice anything strange. Or perhaps your lab is shut down, like mine is, and you are trying to stay productive.
So try this: think if you have results in your notebooks that you would normally not publish. Not because they are low quality or incomplete – though you may want to start looking at those if the shutdown stretches for months and you get hungry.
I am talking about negative results. In physics, we are wired to only share the positive, the breakthroughs! We get a mental Pavlov dog zap when we think about results that are anything but the rosiest of achievements. In other words, we think that physics is an Instagram account.
In fact, negative results advance physics. They are very important. They save people time, they complete our understanding, they help us find the correct path forward. This is obvious, right? So it should not be scary to share them, and it should not be considered a waste of time. Do it now when your lab is closed, but also do it when it reopens.
Share your negative results. You already put effort into doing those experiments or those calculations. You thought about them and discussed them with your colleagues. Others also deserve to know. And you will get credit for this. Negative results do get noticed. The next generation will learn from your findings, and they will do better science than we could do.
Two years ago I had a fantastic experience: a 4-day essay film course offered by the Derek Jarman Lab. The Jarman Lab is connected to the University of Pittsburgh through its founder, Professor Colin MacCabe. They came from London to teach us how to use film as a means of communicating research. Everything had to be finished in just four days, which also included some lecture time on the essay genre and documentary filmmaking. Through this course I learned a lot, met amazing people and had a great time.
I chose to make a film about our research process. It is based on reality but it does not correspond to real events. One thing I learned about documentaries is that the director or producer has enormous power to steer or create narrative. This is very different from scientific papers where facts are supposed to be front and center.
I always planned to go back and re-cut my project, tune the sound levels and fix glitches. But I realized that the Jarman Lab already has it on their website. So why not share it here?
As we were wrapping up, I had no time to add credits. But they are so due! Thanks to my actors Arash Mahboobin, Kelsey Cameron, Lily Ford, Bomin Zhang.
Special effects are by Bartek Dziadosz. He and Lily were our teachers. They are terrific!
Music in the film is by Devon Tipp, who composed it while thinking about Majorana research. This music project deserves a separate blog post!
‘Twas the night before Christmas, and a Hanukkah night. At the same time as Kwanzaa also happened have might…
Gather round, children, for a tale of a little boy – not five years has he spent on this Earth in grad school. This little boy has been very good, he worked the hardest he could – and by year’s end he has submitted his very first paper to arXiv!
Feeling good, feeling proud – candles lit, blessings said – He drank milk, brushed his teeth and was headed to bed.
At this moment of peace and calm – an email arrived! It was from a world famous scientist. The scientist wrote that he read the boy’s paper! “With interest”! Oh! What a wonderful honor – thought the boy – for such a genius, who famously lived in a very tall tower of pure ivory – to have even glanced at my paper!
The boy continued reading:
“I worked hundreds of years in the same field as you. And the papers I’ve written are a million and two. But of them, dear Sir, to my utmost dismight, Every one, except twenty, you have failed to cite!”
The boy felt terrible. Has he been naughty? Was his h-index two sizes too small? Will he not get presents? Did they even get presents in their religion? He could not sleep. He started adding all the missing citations to his manuscript.
“I should cite all the others who have worked very hard!” And his small paper grew… Soon he needed a cart…
But then a magical fairy appeared. “Don’t be sad, I have a spell just for you”. And the fairy hacked into the webcam of the famous scientist’s laptop. The scientist was typing and typing frantically. He was sending emails to all the people who had posted their papers on arXiv that day.
“You see, my dear friend,” said the fairy, “citations are not a form of respect, and they are not for giving credit. They are a tool to help understand the paper better. Excessive citing can make little readers confused, because they wouldn’t know which papers they should look up.”
The boy thought long and hard. He decided that the famous scientist was probably very lonely in his tall ivory tower. And the boy invited him to give a talk at their monthly graduate student seminar. The scientist agreed because there were cookies, hot chocolate and marshmallows.
Much has been said about the utterly vital need to be highly ranked. All sorts of important people, from prospective students to governments, take note of the one number to which your program is distilled by a prestigious, serious and rigorous ranking agency. At Pitt we have not enjoyed particularly high rankings so far, which made us very, very sad.
Until today when we got a shot of awesome news – not only are we ranked high, but we are NUMBER ONE, in the WORLD (In the Universe most likely for that matter).
University of Pittsburgh Physics and Astronomy Department was ranked first in the world for the quality of coffee from a department coffee machine.
To the skeptics out there who might be wondering – how was this ranking established? The same way the US News report ranks graduate schools – we asked a few people. US News surveys department chairs in your discipline (let’s say physics) at US universities, asking them to “name the top physics/condensed matter/particle/astro graduate programs”. Some of them, maybe 30%, reply; US News does some math (likely addition and division) with those numbers, then publishes them for everyone to contemplate and make their life choices. So the ranking of graduate schools is based on the opinions of a few people who likely have never been to most of the schools, haven’t seen their labs, haven’t read their papers, and didn’t talk to their faculty and students…
Inspired by this great system, we did the same. We asked. We did not have time to ask all department chairs in the country, since we only had the idea this morning, so we just asked ourselves. But we’ve been to MANY departments. And we drank their coffee. And we are definitely number one, in physics, in this category.
(If you include other departments, e.g. chemistry, we would probably have to yield to the University of Oregon, though we have yet to visit. If you include regular coffee shops, some of which are in physics buildings, we would probably also drop in the ranking, quite far. If you ask department chairs to rank coffee machines, and then average their replies, you will get Harvard, Berkeley, MIT, Caltech, etc.)
The Electrical and Computer Engineering (ECE) Department at Pitt has just announced an assistant professor search in the area of nanoscale electronics & photonics with an emphasis on quantum computing.
ECE is across the street from Physics, and while this position will be the first in the quantum field for Pitt engineers, the successful candidate will work next to a thriving cluster of quantum physics research at Pitt as well as at nearby Carnegie Mellon University, with two state-of-the-art cleanroom facilities, supercomputers, and, hopefully, several more subsequent hires in quantum computing across the two campuses.
Applications are due by Jan. 7, 2019, although candidates will continue to be considered until positions are filled. Please submit a CV, research and teaching statements, and contact information for at least three references, all in a single PDF file, to ecesearch-TS@pitt.edu.