    Networks Are Killing Science
    By Michael White | August 22nd 2009 02:38 PM

    Here's a little exercise in scientific thinking. What's wrong with the approach to science described in the following passage? (This passage, about applying network analysis to counterterrorism, is taken from the complex systems special feature in the July 24th issue of Science.)

    McCulloh and Carley used metanetwork analysis to analyze 1500 videos made by insurgents in Iraq. "The insurgents would videotape most of their attacks as propaganda," says McCulloh. "As of March 2006, we had something like almost three out of every four U.S. deaths [on tape]." Carley extracted data from these videos, he says, "made a big network out of it, and ran a fragmentation algorithm which clustered them into little groups. And when you go back and look at the videos in those groups, you see forensic clues that identify who some of the insurgent cells were." The details extracted from the videos are classified, "because we worry that the insurgents will learn what we're using," McCulloh says. He and Carley worked with the U.S. military to "operationalize" the technique in Iraq. U.S. commanders there are faced with too much information and too little time to act on it. McCulloh says that Carley's metanetwork software helps them find clues and patterns—boosting the chances of catching or killing insurgents.

    McCulloh claims that the technique has yielded dramatic results. "Sniper activity in Iraq is down by 70%," he says, and he's confident that IED deaths also dropped because of the insights provided by Carley's programs, although he can't cite data. "It's a simple application of metanetwork analysis," he says.

    But Sageman is skeptical that military progress in Iraq can be chalked up to network analysis. "I'm not convinced [metanetworks] have helped at all," he says. "An easier explanation [for the drop in sniper attacks] might be the tribal uprising" against the insurgency in Iraq. "There's no way to know, and that's a big problem with this field in general." Carley counters that Sageman "doesn't understand the methods."


    This passage captures what is wrong with much of econophysics, systems biology, sociophysics, and almost any field that has been tackled by heavily computational complex systems approaches. Many of these researchers don't understand what it means to test a theory. They build these complex models, which involves making important assumptions that could easily be wrong, and then, if their models fit existing data, they think the model is right.

    Hence you get this McCulloh guy claiming that his network analysis model was responsible for a big drop in sniper attacks, ignoring the much more obvious and plausible causes for the drop in violence: the addition of 30,000 troops and the U.S. military's major new approach to counterinsurgency implemented by Petraeus. The network researchers can't justify ruling out the more obvious explanation; their only retort is to say that their critics don't understand their fancy methods. (Which is not true in many cases - there are plenty of physicists, biologists, and economists who understand the mathematical/statistical/computational techniques and who are bothered by the scientific culture of complex systems research.)

    This is a dangerous mindset to have in science. What these researchers are doing is practicing a sham form of science that Feynman called Cargo Cult Science:

    There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.


    And no, that does not mean simply training your model on half of your data set and showing that you can effectively explain the other half of your data. You need to be proactive about probing your model for problems. So your model explains one situation well; now go find another, very different situation and see how well you can explain that. Look for predictions made by your model that have not yet been noted in the real-world system, and see if those predictions are borne out.
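    To make that concrete, here is a minimal toy sketch in Python; the synthetic data and the deliberately naive straight-line model are invented purely for illustration and aren't taken from any of the studies discussed here. The fit "validates" beautifully on a held-out half of the original data, and falls apart the moment you probe a regime that data never covered:

        import numpy as np

        rng = np.random.default_rng(0)

        def true_system(x):
            # Stand-in for the "real" system: saturating, not linear.
            return x / (1.0 + x)

        # Data from the regime that inspired the model (small x, where it looks linear).
        x_obs = rng.uniform(0.0, 0.5, 200)
        y_obs = true_system(x_obs) + rng.normal(0.0, 0.01, x_obs.size)

        # "Validation" by splitting the same data set in half and fitting a line.
        x_train, y_train = x_obs[:100], y_obs[:100]
        x_test, y_test = x_obs[100:], y_obs[100:]
        slope, intercept = np.polyfit(x_train, y_train, 1)

        def r_squared(y, y_pred):
            return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

        print("R^2 on the held-out half:", r_squared(y_test, slope * x_test + intercept))

        # Probing a genuinely different situation: larger x, where saturation matters.
        x_new = rng.uniform(2.0, 5.0, 200)
        y_new = true_system(x_new) + rng.normal(0.0, 0.01, x_new.size)
        print("R^2 in the new regime:   ", r_squared(y_new, slope * x_new + intercept))

    The second number is the one that tells you whether your model has captured anything real about the system, and it's the one that rarely gets reported.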

    Another example from the same issue of Science:

    Complex-systems experts have also made contributions in epidemiology. In 2001, Vespignani and colleagues showed that in certain types of highly connected networks called "scale-free," it's impossible to stop the spread of an epidemic no matter how many people are inoculated. Conversely, in 2003, Shlomo Havlin, a physicist at Bar-Ilan University in Ramat Gan, Israel, and colleagues found a simple strategy for inoculating against a disease that beats picking random individuals. By going a step further and picking randomly chosen friends of those individuals, health officials can, on average, inoculate people with more social ties through which to spread the disease.


    Nowhere in the article do you read about any real-world tests of this model. And you don't see any real-world tests in the actual research paper either. But simply having an untested model is apparently a 'contribution in epidemiology' worth writing about in a news feature.
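    For what it's worth, the inoculation strategy itself is easy to state in code. Here is a rough sketch, assuming Python with the networkx library; the network and numbers are arbitrary toy choices, not the published model, and a simulation like this is of course still a model rather than the real-world test the news feature never mentions:

        import random
        import networkx as nx

        random.seed(1)
        G = nx.barabasi_albert_graph(n=10000, m=3)  # a toy scale-free network
        nodes = list(G.nodes())

        # Strategy 1: inoculate randomly chosen individuals.
        random_picks = random.sample(nodes, 1000)

        # Strategy 2: ask random individuals to name a friend, inoculate the friend.
        friend_picks = [random.choice(list(G.neighbors(random.choice(nodes))))
                        for _ in range(1000)]

        def mean_degree(sample):
            return sum(G.degree(v) for v in sample) / len(sample)

        print("average degree, random individuals:", mean_degree(random_picks))
        print("average degree, random friends:    ", mean_degree(friend_picks))
        # The second number comes out reliably higher (the "friendship paradox"),
        # which is why inoculating the friends removes better-connected potential
        # spreaders on average.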

    At the heart of scientific thinking has to be a strong desire not to fool yourself, coupled with an understanding of how to actually put that desire into practice. Complex systems are an important and relevant topic, but they've been so difficult to tackle because they are messy and hard to study. It's difficult to find the right simplifying assumptions, and to make sure that you've considered all of the important factors that go into the behavior of the system. It's so easy to be wrong.

    And so it's sad to see this emerging scientific culture that bizarrely believes that if you can produce a model that fits the data that inspired you to build the model, you've actually shown that your model accurately captures the system. This culture floods the scientific literature with zero-impact papers, dazzles the computationally naïve, and captures a lot of air time in the news.

    Here's a prediction of my own, one that I'm willing to put to the test: if complex systems researchers don't get serious about the scientific method, their field is going to fizzle out, if not crash and burn. Because in the end you have to move the field forward. The computer models can be dazzling, but unless they produce a demonstrated string of successes that end up changing the way everyone in the field thinks - the molecular biologists, the sociologists, the economists - the sciences of complexity will be dismissed as unfruitful. In the end, your model has to inspire someone to pick up a pipette and design an experiment.

    Comments

    Gerhard Adam
    Good article, Mike.  This is the same type of thing that I run into with systems performance models and simulators.  They look quite impressive, but they are simply animations until they are calibrated against the real world.  This is often significantly harder than it may first seem, and in some cases is so difficult that the model itself becomes a distraction.

    I suspect that you're correct in that computers have rendered difficult problems much easier to program, thereby creating the illusion of a solution.  It's not much different than the problems I've encountered whereby some people who run Excel suddenly fancy themselves to be mathematicians.
    Mundus vult decipi
    adaptivecomplexity
    Wow, you read this thing fast!  Computers are great. In fact, they are transformative, but, like you say, we need to use them properly.
    I think there's some analogy here with computer games. Back when graphics sucked, game programmers thought a lot more about game play, and made great games. Today, glitzy graphics replace thinking deeply about game play. Technology has become a substitute for thinking deeply about a problem.
    Mike
    Gerhard Adam
    Without extending the comparison too far, this is a problem that exists with most technologies.  Calculators are a great tool, until they become substituted for basic arithmetic skills.  Special effects in movies are great, until they become the focus of the story.

    Similarly, the ability to generate graphs from points in a spreadsheet is a great tool, until the graph takes on an existence of its own.

    In the end, it's a situation that legitimately raises the question of whether the widespread use of any technology is more productive, or whether everyone is simply busier.  Don't get me wrong, there are some obvious advantages, but one can't help but question the usefulness of e-mail once an individual is spending an hour or two a day reviewing spam and irrelevant/unimportant messages.
    Mundus vult decipi
    adaptivecomplexity
    The key is that we should still be thinking - about our research problems, about our correspondence, about whatever, just as deeply as previous generations did. Game developers (to take it back to that example) should still be thinking just as deeply about game play as earlier developers did.
    In science, all of this computational power and (in biology) high-throughput data will generate something that you can probably publish somewhere. But without self-criticism, you're just doing something that looks like science but doesn't really advance the field.
    Mike
    Killing science? Come on now. There is a big difference between data analysis and testing a well-formulated hypothesis. Data analysis per se can be misleading, sure. But that's the beauty of science: somebody else will inevitably come forward and do a proper analysis. The more high-profile the original mistake, the more incentive for some young Turk to come in and make a name for him/herself.

    Here's a good example in network science: http://www.ams.org/notices/200905/rtx090500586p.pdf Barabási draws some very strong conclusions based on data that he didn't understand properly. It was published in a high-profile journal and generated a lot of attention, including the attention of people who actually understand the system. They write a paper that analyzes the data properly and refutes Barabási's claims about the scale-free structure of the internet. Problem solved. By the way, one of the authors of the paper has a pretty good interview about network science: http://www.acm.org/ubiquity/interviews/v10i8_alderson.html

    The bigger point, it seems to me, is the question of causal inference from observational data. Andrew Gelman has a recent post about this and more: http://www.stat.columbia.edu/~cook/movabletype/archives/2009/08/economet...

    adaptivecomplexity
    They write a paper that analyzes the data properly and refutes Barabási's claims about the scale-free structure of the internet. Problem solved.


    The problem may be solved, but no progress has been made. In my field, the literature is literally flooded with models that aren't worth testing, and whose authors aren't interested in having the models tested.

    Even worse is a self-reinforcing culture of delusion. Computational papers often cite other computational papers as having shown some result that has in fact not been empirically demonstrated. And then you get news stories like the Science magazine pieces I cited.

    I'll admit that there's some hyperbole in the title.
    Mike
    Hfarmer
    After reading your third paragraph I could not help but be reminded of the dark matter debate in cosmology.  Much of the "evidence" for dark matter comes from computer models.  As one critic put it, anyone can twiddle the knobs on their machine and produce anything they like.  I have even seen people want to numerically solve equations which could be solved analytically (but not simply).
    Science advances as much by mistakes as by plans.
    Excellent article!

    I know at least one other field that is plagued by cargo cult science: geophysics
    Same issue here: a blind trust in models.
    I'm taking the safe bet that this field will "burn down" before the middle of the century.

    Climate models anyone?

    adaptivecomplexity
    It all depends on how much the modelers care about testing their idea. I think the new computational tools out there offer great opportunities. If you go into modeling understanding that you need to produce testable hypotheses, i.e. opportunities to prove your model wrong, then you're at least starting on the right track.
    In every field - geophysics, biology, climate science, economics, etc. - there are people who understand this, but the literature gets swamped with papers by people who don't get it.
    Mike
    I agree with you 100% on the problem, but disagree 100% on the prediction. :) Too many people are easily seduced by models, and too many people don't have a proper appreciation for rigorous testing, for that brand of "science" to ever go away. The business world is literally FILLED with people who can't tell valid tests from invalid ones... and the producers of those models are very rarely held to account, simply because they can wave around their statistics degrees and proclaim that they're smarter than the average Joe. Sad, but true.

    Nice piece.
    Though I think the title is a little bit too strong, I have no complaints about the content.

    Even though I'm doing my doctoral thesis in a related field (''econophysics'' - I really hate this name), I agree with you. If you read through the Quantitative Finance section of the arXiv you'll be amazed at how many crackpots like this field and how many ''physicists'' claim to know the solution to the current financial crisis. But even if you discard the crackpots and focus on the serious papers, you'll find many of them contaminated by the notion that just fitting some power laws here and there is enough to make a contribution to the field.

    So, the histogram of some random variable fits a power law. So what? This is old news. We already know that it is very likely that a system with complex nonlinear dynamics will have some fat-tailed distribution. This is just the expected behavior. This was news 20 years ago.
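    To see how little a straight line on a log-log plot actually proves, consider a small sketch in Python (numpy and scipy; the synthetic data are purely illustrative): samples drawn from a lognormal distribution, which has no power-law tail at all, still give a very respectable straight-line "power-law fit" on a log-log histogram of the tail.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        x = rng.lognormal(mean=0.0, sigma=2.0, size=100000)  # fat-tailed, but not a power law
        tail = x[x >= np.median(x)]                          # look only at the upper tail

        # Log-log histogram of the tail, then an ordinary straight-line fit.
        bins = np.logspace(np.log10(tail.min()), np.log10(tail.max()), 40)
        density, edges = np.histogram(tail, bins=bins, density=True)
        centers = np.sqrt(edges[1:] * edges[:-1])
        keep = density > 0

        slope, intercept, r, p, stderr = stats.linregress(np.log10(centers[keep]),
                                                          np.log10(density[keep]))
        print(f"apparent power-law exponent: {-slope:.2f}, r^2 = {r ** 2:.3f}")
        # A high r^2 here says nothing about the generating mechanism. Telling a power
        # law apart from a lognormal takes a proper likelihood-based comparison
        # (e.g. Clauset, Shalizi & Newman 2009), not a line through a binned histogram.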

    Complex Systems was a very exciting promise in the '90s, but, with rare but remarkable exceptions, most papers and books I've read that insistently advertise being about ''complexity science'' are very confused, lack precision and methodology, lack strong conclusions, and lack experimental data and verifiable predictions.

    Surely there are a lot of serious people doing serious research in this area. But in my field of research (''econophysics''), most of the really interesting papers I've read come from the same handful of research groups, and often from people who collaborate with real economists.

    There isn't really a field called econophysics, in my opinion. There is economics. And there are some physicists, like me, who are interested in modelling that stuff. That's all there is to it. What we do is not really different from what some economists have been doing for decades now, and if physicists would only read the appropriate literature they would know that. Of course we know some interesting techniques they don't know and would find useful. Of course we can present some alternative theoretical and mathematical points of view. But we are doing the same stuff, and the name of this field is economics.

    I like to joke with my colleagues that I'll write a script to crawl every table of numbers on the web, automatically fit a power law, fill in the blanks in a standard pre-made paper, and submit it to Physica A. My paper count will rise dramatically.

    adaptivecomplexity
    Surely there are a lot of serious people doing serious research in this area. But in my field of research (''econophysics''), most of the really interesting papers I've read come from the same handful of research groups, and often from people who collaborate with real economists.
    I have the same complaint. Really, I love physicists, and I think there are some fantastic tools out there for studying complex systems. But we're forgetting our scientific roots. Biologists and economists aren't stupid - they've done a good job identifying key questions in the field. They can now benefit from the tools that physicists have to offer, but physicists coming into the field need to become biologists and economists, not physicists dabbling in other fields. The first generation of molecular biologists are a great example of physicists who transformed themselves into first-rate biologists.
    Mike
    Florian
    "but physicists coming into the field need to become biologists and economists, not physicists dabbling in other fields." 

    So true. This is exactly what happened in geosciences.
    Physicists who came to the field never became geologists. Worse, they generally ignored and dumped previous work done by geologists. So there used to be a rivalry between geologists and geophysicists, but that rivalry seems over by now, as the latter totally dominate the field. The issue is that geophysicists lack critical thinking regarding their models.
    In the 60s, on the eve of the tectonic revolution, geologists were KO'd by the new paleomagnetic results proving the reality of continental mobility, a reality that they had denied for 30 years.  So geophysicists literally hijacked the field. At that time, they had the choice between two paths to explain the apparent wander of continents: plate tectonics and earth expansion. They chose the wrong path because they were reasoning like physicists, not like geologists. Sam W. Carey, a brilliant Australian geologist, fought as hard as he could to put them back on the right track, but did not succeed. Some geoscientists, like Giancarlo Scalera, James Maxlow, and others, are still working hard on the expanding earth theory, but that beautiful unifying theory is mostly the playground of all sorts of cranks by now, and backpedaling seems impossible for the rest of the geoscience community. In consequence, the entire field is going to the wall.


    That's sad for Science 
    Thanks for this article, it puts into clear language many reservations I have long had about this kind of 'research'. Technology certainly contributes to its proliferation, but as an attitude of mind it predates modern computing. For example, this backwards way of thinking has dominated sociology for many years: indeed, perhaps it IS sociology.

    Sad to see a Wikipedia link infecting an otherwise admirable piece of serious writing though. Try this:

    http://www.lhup.edu/~DSIMANEK/cargocul.htm

    Hank
    Wikipedia is changing to moderation, which should pretty much kill it.   Your print version link is just better anyway.
    adaptivecomplexity
    Wikipedia had an HTML and a PDF link, so that's why I linked to Wikipedia. I should have just linked to the PDF of the print version.
    Mike
    Ni!

    Why is it that everywhere there's somebody without a clue being strongly prejudiced against Wikipedia?

    1st there's nothing fundamentally wrong with citing Wikipedia as a secondary source for reference to a concept, and even the practical objections go away, if you care, by citing a specific version of an article, which is quite easy to do: use the permanent link anchor in the left menu.

    2nd Wikipedia is not turning to moderation; it does have issues with its current size and level of articles, but is striving to become even more open even if that does not seem to surface; stop blindly following the tabloids who blindly follow the very kind of pseudo-science criticized here.

    Michael, your article is great and the title is perfect - it's OK to be hyperbolic to be anecdotal. Rafael (from a comment above) and I were recently discussing this very issue in the corridor of our department. He's the "econophysics" guy and I'm the "networks" guy around here.

    Hugs,
    ale
    ~~

    Hank
    2nd Wikipedia is not turning to moderation;
    Jimmy Wales, the founder of Wikipedia, disagrees with you.
    Ni!

    I'm not starting "yet another get to know before you get to criticize wikipedia" debate here, so this is my last message on this thread, but here are a few points:

    1st You seem to think Jimmy Wales has executive power over Wikipedia policy. That is plainly false.

    2nd The link you sent is unhelpful, as the discussion about the Flagged Revisions feature was archived in Jimbo's talk page months ago. But since I read that discussion at the time, I can comment on it without digging through the archives. He at first supported a strongish version of the feature for Biographies of Living People, but later supported the choice made by the community of using a weaker kind of flag, which is going to be implemented sometime soon.

    3rd The German version of Wikipedia has a strong version of Flagged Revisions active for over a year now and is doing great.

    4th You seem to think Flagged Revisions equals moderation, which is not true. In the flavor chosen for the English Wikipedia it actually takes away what little moderation there was before, as it will replace protection and semi-protection and is more open than both.

    5th I was actually with Jimbo last week at Wikimania 2009 in Buenos Aires, not for the first time, and I can pretty much assure you that, even if he had the power to enforce such a change, he definitely does not favor moderating Wikipedia.

    For further amusement and information, see Wikipedia:Flagged protection and patrolled revisions

    Hope this was useful.
    Hugs,

    ale
    ~~

    adaptivecomplexity
    Thanks for your kind comment.
    Just to clarify, my intention was not to cite Wikipedia even as a secondary source - I often like linking to Wikipedia as a source of further links, and not necessarily for the article itself. The links at the bottom of many of the pages are a good start for further reading. Even if the article is just a stub, you can often find good links there.
    Mike
    Interesting article, but I think it is way too general, encompassing fields that do not follow this model. I work in systems biology where as a general rule you cannot publish work unless it has been validated by laboratory results. I am unfamiliar with other areas discussed in this article, but recent history has shown that using a reductionist (simplification) approach to gene signaling yields significant failures. These failures have only been overcome by applying a complex systems model when designing a protocol to test the signaling hypothesis. Maybe the author could provide examples in systems biology that support his model.

    adaptivecomplexity
    I work in systems biology where as a general rule you cannot publish work unless it has been validated by laboratory results.
    Being somewhat of a systems biologist myself, I've got to heartily disagree with this. I work on yeast cell cycle transcription, and it's amazing how many things about the yeast cell cycle have been 'shown' computationally but never actually tested. It's true that these studies often base their models on expression profiling and ChIP-chip, but after using these genome-scale data sets to build their network models, many modelers leave it at that. Which is why one group that does experiments took a swipe at several papers that didn't bother to test their models. I'm OK with modeling-only papers, as long as they are a step on the path towards actual experimental tests. But I've attended too many talks by computational/systems biologists who spend their careers just zipping along from model to model, without any apparent interest in checking how well they're doing.
    Mike
    I agree with the general point that you are making -- contemporary scientists do build larger, more complex models than before, and flashiness does come to replace deep thinking in many cases.

    It is worth noting, however, that some large & complex models are being tested against data. In particular, the Vespignani group whose work is mentioned in the Science article were using their epidemic model to make "realtime" predictions about the H1N1 flu epidemic earlier this year. They calibrated and recalibrated their model to available data and made its forecasts publicly available here: http://www.gleamviz.org/ .

    adaptivecomplexity
    Thanks very much for the link. 

    Part of the problem in the Science news story was poor reporting, at multiple points.  The Vespignani group's current work should have been included in the story. Testing the model predictions on a new epidemic was the kind of thing I was hoping to see.

    Mike
    I love the ending of this article: 'get serious about the scientific method'? Are you kidding? Show me the scientific method ANYWHERE in mainstream science in the last 50 years. Point me to the experiments behind String Theory. It's a nasty, cult-powered, desperate fantasy. What is the scientific method after all? Most of science turned against it when it didn't fit the quantum behavior problems. But seriously, all it is is an application of LOGIC, which the Greeks gave us ages ago, so putting science in its name and claiming it as a provenance is laughable. The root of this discussion is really rhetoric: logic tells us how to test hypotheses, but not how to come up with them. Scientists make it seem like just poking around and doing some experiments will lead the way. The efficacy of this approach is no better illustrated than in the history of one of the newest sciences: nutrition. Scientists noticed that serum cholesterol levels were higher in people with heart disease. So they came up with a theory: sludge accumulating in the veins. Cholesterol is in food, so don't eat it (this turned out to be preposterous AND wrong); otherwise, the sludge will cause a heart attack. The low-fat diet that was concocted to solve the problem was put to a scientific-method test that was stopped halfway through because MORE people were dying. Then, lo, 20 years later, some other people discover inflammation and the whole model of the heart attack (the nation's #1 killer) disintegrates in a half second. The story on hormones is even more pathetic. The amazing thing about all of these cases is that scientists don't seem to realize that when they tell people to go with their theories, and then they are wrong, they, um, CAUSED their deaths (trying to stick to the SM).

    I have a prediction for the author: I do believe that he is right that there will be one survivor. My prediction is that it will be the other camp. Nova had a great episode about epigenetics. While the science guys are still lauding the likes of Craig Venter (who owes more to Barnum than to Newton), this show was about a nice Swedish guy who took harvest records back 10 generations and made a huge discovery about diabetes (one of the up-and-coming epidemics). My question: what even makes him a scientist? Nothing. Frankly, he's a historian. And historians all over the place are starting to get their hands on Bayesian inference tools, and the result is already clear: the science citadel has already fallen.

    Hank
    I have a procedural question.   If Seed magazine links to your article

    seed magazine zeitgeist

    and the column is called Zeitgeist, do they still have to put a dollar in the jar ...


    ... even if they're showing you some link love?   This is a whole new layer of zeitgeist.  If so, let me know and the next time I am in mid-town Manhattan I will hit up Adam Bly for a buck.   And get a picture with Bloggy!   I am told he is a pretty good sport so he'd probably go for it, right?
    adaptivecomplexity
    The fact that they use Zeitgeist as the heading for a whole section says a lot. Bly should put way more than a buck in the jar.
    He should put a buck in my jar for changing my title!
    Mike
    Only want to say that this is an excellent article, and I couldn't agree with you more. I have covered this research as well, and came to a similar conclusion (here: http://bit.ly/w2nFR and here: http://bit.ly/4sY37U)

    adaptivecomplexity
    Thanks for the link - your blog looks very interesting.
    Mike
    I would agree with you on many points (particularly the shallow nature of the network research by Barabasi). I would take issue with your title, though, and the underlying assumption that whatever Science and Nature choose to cover in a field represents the entirety (or even the best) of that field. Social scientists, epidemiologists and the like had been conducting inferential analysis on social networks for decades before the physicists decided to dabble in it. This work has sometimes (although not always) had much more of a flavor of generating and testing hypotheses to it. Probably not as much as you'd like, but certainly more so than in the observational social and behavioral sciences generally, and definitely more so than in the recent highly publicized crop of network models by physicists. None of this work gets into the popular science press because it's just not as "sexy" as a physicist presenting a grand (untested, and patently ridiculous) theory about how all social networks follow one underlying rule. This more established, and more inferential, version of network analysis continues unabated, has had strong influences in various branches of sociology, infectious disease epidemiology, and anthropology, and will likely continue to do so long after the scale-free nonsense is forgotten.

    Do also keep in mind that in fields like infectious disease epidemiology, it is in many cases utterly impossible to test hypotheses about crucial topics such as vaccine strategies or behavioral interventions. It simply cannot be done, for obvious ethical reasons. Collecting behavioral and biological data, and building the best models possible given the existing data, is sometimes the best that can be done. It may not match your definition of science, but it is a necessary branch of research. This isn't meant to contradict your critique of the specific studies in this article, because I agree with you that they didn't even really rise to this standard.

    adaptivecomplexity
    Social scientists, epidemiologists and the like had been conducting inferential analysis on social networks for decades before the physicists decided to dabble in it. This work has sometimes (although not always) had much more of a flavor of generating and testing hypotheses to it.
    Sure, I agree that some good stuff has been done, especially before it became the trendy thing to do. I also agree with you about epidemiological models and experiments - ethically, you can't do the experiments to test your models, but nature does a lot of the experiments for you (which is true of a lot of non-lab-based fields). Someone who builds a model on one set of epidemiological data needs to test the predictions of the model on a future outbreak (as the Vespignani group is doing, according to one of the comments posted above). In my particular field, the problem certainly goes beyond news articles in Science and Nature. Network/systems biology really has a cultural problem; we need more rigor when it comes to testing our models.
    Mike
    Thanks for that reply, and glad we agree. I don't know systems biology but can imagine that the issue there is somewhat different than in social sciences and epi. In either case, I guess I'd just caution you against using grand titles like "Networks are Killing Science". I've seen some friends who've been doing solid social network research for two decades having to constantly defend their work of late against people who've never read it, because they've just read the high-profile work that is crap, and assumed that the whole field is crap. Anyone who just sees your title will have that impression confirmed.

    MarshallBarnes
    I don't have a dog in this one way or the other. What I find interesting is this comment: "McCulloh says that Carley's metanetwork software helps them find clues and patterns—boosting the chances of catching or killing insurgents".



    The key is what the clues are, which we aren't told, and if the insurgents that they are catching or killing are neutralized by taking action on those clues. We don't know. The current military is so screwed up that there's no way to guess one way or the other. I'm just taking a personal interest in the story because I had been suggesting to military connections that I have that they use video surveillance to get insurgents as far back as 2004. I shouldn't have had to do it, as I think it's pretty obvious. The article cites 2006 casualty figures, so there you go. The basis of using video to find clues and patterns has merit, but of course it requires feedback containing correlations with those clues and patterns, to know that it's working.  Sageman is wrong in stating that there's no way to know. For example, let's say that after reviewing x number of hours of this footage, a number of the same vehicles are always in the area before an IED attack. Plate numbers are lifted from the vehicles, and they can track down who owns them. They put surveillance on those individuals and find out that known suspected terrorists are hanging out with them. Further investigations yield the identification of a network of safe houses and bomb-making hideouts. The network and its operatives are retired. That would be an example where Sageman would be wrong. However, we're not in a position to know if any of this is how they're doing it or not, and Sageman probably isn't either, or he would have said that they're not catching the bad guys that way - whoever Sageman is (I couldn't get the article).
     
    Too bad. I'd like to know myself...


    adaptivecomplexity
    The key is what the clues are, which we aren't told, and if the insurgents that they are catching or killing are neutralized by taking action on those clues. We don't know. The current military is so screwed up that there's no way to guess one way or the other.
    I'm not sure I'd expect the military to reveal just what particular clues they're using, and how effective those clues are, just for reasons of security. They very well may not be doing this kind of thing effectively, but I wouldn't expect them to make the details public. But it does raise the issue of how predictive the computational models are, as you discuss. Without feedback you don't know how well your models are doing. It wasn't quite clear to me, from the article, where the scientists were getting information about success rates, but if they're just relying on media reports, I don't trust their claims of success, because, as you say, we don't know what strategies the military has implemented.
    Mike
    Arguments in favour of "Networks are killing science":
    -Correlation is not causation.
    -Information is less than knowledge. Data should lead to information, information should lead to knowledge, knowledge should lead to wisdom, wisdom should lead to the truth.
    -Just fitting the data is not understanding it.

    Some arguments against "Networks are killing science":
    -All models are wrong, some are useful - George Box (and some are more useful than others)
    -Complex systems and also computer simulations need not be about the world as it is; they can also cover 'the world as it could be'.