Scientists, of course, share a common goal of understanding material reality. Each one of us has a limited range of personal experiences, and we can only conduct so many experiments in a lifetime, so we try to listen to other scientists in order to get a better idea of what else material reality might involve beyond the experiments we have carried out on our own. The broader the range of approaches we use to study a given phenomenon, or a range of phenomena, the better we can understand the whole of reality.
Until recently, much of scientific methodology was based on a particular scientific lens known as reductionism. Through this lens, scientists try to understand reality by breaking it down into component parts and then studying the behavior of those parts. A huge range of experimental methods and tools can be employed to study reality using this lens, so as a sort of meta-methodology it has been wildly successful. Reductionism has been the basis of most scientific fields and has led to an impressive range of discoveries and technologies. Its elegance lies in stripping away all the "other stuff" making up a phenomenon, which lets us isolate the proximal causes of one small part of it. Ideally, scientists narrow things down to the point where they can manipulate the factors they think are involved, essentially making themselves the proximal cause of that small part of the phenomenon. The level of control involved in pulling apart phenomena this way means that technology readily flows from such scientific study, and so phenomena that can be dissected in this way tend to become the most lucrative technologies. (Perhaps this explains why otherwise reasonable individuals are perfectly willing to believe in electricity but not in macroevolution -- they simply can't figure out how to make any money from it?)
One of the limitations of reductionism is actually one of its most useful assumptions: that the parts of a phenomenon are independent. This assumption suggests that it is perfectly valid to look at all of the parts of a phenomenon in isolation from each other, and that merely combining the understanding of all those parts will lead to an understanding of the phenomenon as a whole. There are cases where this is clearly absurd. For example, it doesn't seem entirely fair to look at the expression of every gene in an organism as if it were truly independent of all the others. The genes, after all, share an evolutionary history, are found in the same individual, and many directly influence each other. Even traditional corrections for multiple statistical tests don't seem to fix this problem.
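To make the multiple-testing remark concrete, here is a minimal sketch of the two standard corrections in plain Python (the p-values are hypothetical, not real gene-expression data). Note that the Benjamini-Hochberg guarantee itself leans on independence (or positive dependence) between tests, which is exactly the assumption in question for co-expressed genes.

```python
def bonferroni(p_values, alpha=0.05):
    """Reject a null hypothesis only if its p-value clears alpha / m.
    Controls the family-wise error rate even under dependence,
    but at the cost of being very conservative."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false
    discovery rate, with a standard guarantee that assumes
    independent (or positively dependent) tests."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears its threshold
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

pvals = [0.001, 0.009, 0.02, 0.04, 0.3]
print(bonferroni(pvals))          # only p <= 0.05/5 = 0.01 survive
print(benjamini_hochberg(pvals))  # FDR control rejects more liberally
```

Neither procedure models the correlation structure among the genes themselves; they only adjust thresholds, which is the limitation the paragraph above is pointing at.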
A more familiar example of this difficulty is the experience of consciousness. It seems incredibly difficult to predict consciousness as an outcome of physics, biochemistry, and cell biology -- nothing we know about the parts of the nervous system, in isolation, seems terribly predictive of how this phenomenon arises. How can we explain this and similar sorts of emergent properties?
This very question helped lead to the development and use of systems theory. Instead of breaking wholes down into parts, this scientific lens focuses one's attention on the connections between the parts and the flows of material or energy among them. In other words, this is more of a synthetic approach than an analytical one. It tends to be more conceptual in nature, with a major focus on designing mathematical and computational models of phenomena to see where unusual behaviors start to develop in the system as a whole. Comparisons are then made between the models and the natural systems to determine how well the model explains the observations. Such approaches have led to new ways of thinking about ecology and biology, but for the time being, the preponderance of researchers in this area have strong engineering or physics backgrounds.
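As one illustration of the kind of computational model described above, consider the classic Lotka-Volterra predator-prey equations. Neither equation, studied alone, predicts oscillations; the cycling emerges from the coupling between the two parts. This is a minimal sketch with illustrative parameters and a simple forward-Euler integrator, not a model fit to any real ecosystem.

```python
def simulate(prey=10.0, pred=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
             dt=0.001, steps=20000):
    """Forward-Euler integration of the Lotka-Volterra system:
       d(prey)/dt = a*prey - b*prey*pred
       d(pred)/dt = d*prey*pred - c*pred
    Returns the trajectory as a list of (prey, pred) pairs."""
    traj = [(prey, pred)]
    for _ in range(steps):
        prey, pred = (prey + dt * (a * prey - b * prey * pred),
                      pred + dt * (d * prey * pred - c * pred))
        traj.append((prey, pred))
    return traj

traj = simulate()
prey_series = [p for p, _ in traj]
# The prey population rises and falls cyclically rather than
# settling to a fixed point -- behavior visible only in the
# coupled system, not in either equation alone:
print(min(prey_series), max(prey_series))
```

In practice one would compare such a model's oscillation period and amplitude against field observations, which is the model-versus-nature comparison the paragraph describes.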
Of course, systems thinking also has its drawbacks and can never be used entirely in isolation from reductionist approaches. Obviously, without at least a rudimentary understanding of the parts, systems thinking is doomed to fail. Furthermore, because of the potentially huge number of connections between parts in even a moderately sized system, the data collection phase for understanding the system at baseline can be quite intensive. Connections can be difficult to measure because they are transient, or because flows from one part to another are inherently difficult to measure. For example, if a cryptic mycorrhizal species is discovered in a forest ecosystem, an entirely new study might be needed to determine matter or energy transfer rates between that species and each species it interacts with before a viable model of the ecosystem could be generated. This sort of work could take a few years to complete in its own right.
Despite the difficulty of carrying out science from a systems perspective, it is very important that we do just that. As a species with huge material and energy transfer rates, we have an incredible ability to affect many aspects of the environment, such as species diversity, global climate patterns, and ocean pH. Studying such phenomena in isolation from each other is unlikely to reveal the feedback patterns we badly need to understand in order to ensure our future survival. There are good arguments to be made, too, that systems thinking is a survival tool not merely for the environment, but also for the increasingly globalized economy.
And of course, it would be pretty neat to know things like how plants integrate environmental signals to generate a specific response to a suite of environmental constraints. Or, for the more philosophical among us, to know from a systems perspective how Descartes came up with his famous statement, "I think, therefore I am."