It's always satisfying to see concepts in the sciences that reappear with varied nomenclature across fields and disciplines.  There are countless examples, but one that has always stood out for me is the concept of Green's functions.  It is such an interesting and important concept in all of pure and applied science, and explaining it will require some exploration of another interesting idea called the delta function, along with one more important concept, linearity.

What is a Green's function?  Scientists and engineers often work with differential equations or difference equations to describe the behavior of some theoretical or applied model.  A differential equation is what the name implies - an equation that describes how the model responds to small differences, or changes, in some quantity such as time or space.  It's always easier to start with a discrete environment and then move to the continuum via a limiting approach.

A difference equation is the discrete analog of a differential equation and effectively comprises a recurrence relation, for example: y[n]=c*(x[n]-x[n-1]), where x[n] is some discrete function and c is some constant.  If we know y[n] together with an initial condition on x[n] (i.e., x[0]), we can determine x[n] for all n by stepping this recurrence relation forward.
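
As a minimal sketch of what "stepping the relation forward" looks like (in Python, with made-up numbers that aren't tied to any real model):

    # Recover x[n] from y[n] = c*(x[n] - x[n-1]), given x[0] and the y's.
    # The values of c, x0 and ys are arbitrary, purely for illustration.
    c = 2.0
    x0 = 1.0
    ys = [4.0, 2.0, 3.0]           # y[1], y[2], y[3]

    xs = [x0]
    for y in ys:
        xs.append(xs[-1] + y / c)  # rearranged recurrence: x[n] = x[n-1] + y[n]/c

    print(xs)                      # [1.0, 3.0, 4.0, 5.5]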

A differential equation expresses just such a recurrence relation in the continuous case through the use of derivatives.  For example, df/dt=c*f(t).  This elementary differential equation says that the time derivative of the function is equal to the function itself multiplied by some constant.  The solution is an exponential, f(t)=f(0)*e^(c*t).
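
A quick numerical sanity check of that claim (a sketch, with arbitrary numbers): stepping df/dt = c*f forward in tiny discrete increments reproduces the exponential.

    import math

    # Integrate df/dt = c*f with simple forward-Euler steps and compare
    # against the exact solution f(t) = f0 * e^(c*t).
    c, f0, dt, steps = 0.5, 1.0, 1e-4, 20000   # integrate out to t = 2.0

    f = f0
    for _ in range(steps):
        f += dt * c * f                        # discrete version of the derivative rule

    t = dt * steps
    print(f, f0 * math.exp(c * t))             # both are approximately e^1 = 2.718...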

The standard spiel on Green's functions is that they represent the inverse of some linear differential operator, which can be used to solve a related differential equation with a source term by invoking Green's identities.  In effect, the Green's function undoes the action of the differential operator.  In a discrete space, the analogous object is simply the inverse of a non-singular matrix; a Green's function is more or less the continuous version of that inverse.
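
Here is a small sketch of that discrete picture (numpy assumed; the matrix size and boundary conditions are arbitrary choices for illustration).  The operator is a second-difference matrix, and each column of its inverse is the response to a Kronecker delta placed at that position.

    import numpy as np

    # A discrete 1-D second-difference operator with fixed endpoints.
    n = 5
    A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

    G = np.linalg.inv(A)                   # the "discrete Green's function"

    # Column j of G solves A x = e_j, i.e. it is the response to a
    # delta located at index j.
    e2 = np.zeros(n); e2[2] = 1.0
    print(np.allclose(A @ G[:, 2], e2))    # True

    # For an arbitrary source b, the solution of A x = b is just G @ b.
    b = np.array([1.0, 0.0, -2.0, 3.0, 0.5])
    print(np.allclose(A @ (G @ b), b))     # True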

But there's a much more utilitarian and intuitive description that I prefer, one that relates to how "Green's functions" are used in applied sciences such as engineering.  A Green's function represents the "response" of a system to a special type of input - namely, a singular, highly concentrated input.

What do I mean by a singular input?  I am referring here to a delta function.  Really, the term 'delta function' is a misnomer.  It isn't a function at all but rather a mathematical abstraction referred to as a distribution.  Without getting bogged down in the highly technical nature of distribution theory, a delta function can be thought of as a mathematical model for something in the world that is highly concentrated in time or space.  That is, this thing occurs or exists over a very small extent of time or space.  For example, if an amount of mass is concentrated over a very small distance around a point, the delta function d(x) models it as something sharply peaked at 0.

It is often useful to start with a specific function such as a Gaussian and imagine the limiting case as the width of that function shrinks.  We imagine the function becoming more and more sharply peaked and less spread out in space.  In order to get something finite in an integral, the height must grow without bound as the width shrinks.  Some prefer to think of a delta function as 0 everywhere but infinite at one point.  This is actually formally incorrect: such a "function" is nonzero only on a set of measure 0 and so contributes nothing to an integral.  It is best to think of a delta function as the limiting case of a narrowly peaked function as the width of that function approaches 0.
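
To make that limiting picture concrete, here is a small numerical sketch (numpy assumed; the test function cos(x) is an arbitrary choice).  As the Gaussian's width shrinks, integrating it against a test function picks out the function's value at 0.

    import numpy as np

    x = np.linspace(-10, 10, 200001)
    dx = x[1] - x[0]
    f = np.cos(x)                      # any smooth test function will do

    for width in [1.0, 0.1, 0.01]:
        # A unit-area Gaussian that gets taller as it gets narrower.
        g = np.exp(-x**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))
        print(width, np.sum(f * g) * dx)   # approaches f(0) = 1.0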

Another concept that is often thrown around in pop culture and worth a brief explanation is linearity.  People like to talk about something as linear or non-linear but what does this concept really mean?  It is an important concept to flesh out because it is perhaps the most utilized idea in all of theoretical and applied science.  

In short, we don't know what to do with all but the simplest non-linear systems.  What to do?  Linearize the system over some finite region and solve within that region.  Repeat this procedure ad infinitum to obtain a global solution.  In the loosest sense, linearity means something called the superposition principle applies.  This means that I can study the behavior of the system of interest under different inputs, arriving at a respective output for each one.  The sum of those outputs will then be the response of the system to the sum of the inputs.  This is a beautifully simple but remarkably powerful idea that allows one to study a system in simple isolated cases and be assured that the response of the system to more complex inputs is just the sum of the responses to the simple inputs.

For those who are a bit more mathematically inclined, linearity in a more formal sense actually implies two things: (1) superposition; (2) scaling.  I already described the superposition principle: L(x+y)=L(x)+L(y).  The scaling property is this: L(a*x)=a*L(x).  It means that the output of the system for a scaled input is just the scaling factor multiplied by the response of the system to the original input.
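
A tiny sketch of both properties at once (Python; the first-difference operator and the inputs are arbitrary choices).  A linear operator satisfies L(a*x + b*y) = a*L(x) + b*L(y), while a non-linear one does not.

    import numpy as np

    def L(x):
        # A simple linear operator: the first difference, y[n] = x[n] - x[n-1].
        return np.diff(x)

    x = np.array([1.0, 4.0, 2.0, 7.0])
    y = np.array([0.5, -1.0, 3.0, 2.0])
    a, b = 2.0, -3.0

    # Superposition plus scaling in one check:
    print(np.allclose(L(a * x + b * y), a * L(x) + b * L(y)))    # True

    # A non-linear operation, such as squaring, fails the same test:
    print(np.allclose((a * x + b * y)**2, a * x**2 + b * y**2))  # False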

A slightly hand-wavy but useful way to think about delta functions is this.  We can take any function and decompose it into a superposition, or sum, of delta functions, one for each point of the function, with each delta function in the sum multiplied by a weight.  What is the weight?  The weight is the value of the function we are seeking to decompose at the very point that delta function corresponds to.  Again, it is easier to think of a discrete space first.

Imagine a discrete function indexed by some variable such as time, which has the value 4 at time 0, the value 2 at time 1 and the value 3 at time 2.  This discrete function can be decomposed into the sum of three delta functions: 4*delta[t]+2*delta[t-1]+3*delta[t-2].  By the way, these are a special type of delta function for discrete variables called Kronecker delta functions, but the concept, not the name, is the important thing.
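
Here is that decomposition spelled out (a small sketch; numpy assumed):

    import numpy as np

    def delta(t, shift=0):
        # Kronecker delta: 1 where t == shift, 0 elsewhere.
        return (t == shift).astype(float)

    t = np.arange(3)                       # times 0, 1, 2
    signal = np.array([4.0, 2.0, 3.0])

    rebuilt = 4 * delta(t) + 2 * delta(t, 1) + 3 * delta(t, 2)
    print(np.allclose(rebuilt, signal))    # True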

Imagine now that the time variable becomes continuous.  Then the Kronecker deltas will become something called a Dirac delta (the continuous analog of a Kronecker delta), all differences will become derivatives and all sums will become integrals.  Thus, a continuous function can be represented as the integral of a linear superposition of Dirac delta functions, each multiplied by a weight representing the value of the continuous function at the corresponding point.
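
Written out as a formula, this is the so-called sifting property of the Dirac delta:

    f(t) = \int_{-\infty}^{\infty} f(s)\,\delta(t - s)\,ds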

So, how does all of this abstraction relate to Green's functions?  As suggested earlier, all that a Green's function represents is the "response" of our system to a delta function.  If we know this kernel of information, we automatically know the response of the system modeled by a differential equation to an arbitrary source function.  Why?  Because any function can be decomposed into a linear superposition of delta functions, the response of the system is simply the corresponding superposition of its responses to those delta functions!
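
Here is the whole argument in miniature for a discrete linear system (a sketch with an arbitrary toy system; numpy assumed).  We probe the system once with a delta, record the response, and then predict its output for any other input by superposing shifted, weighted copies of that response - which is exactly a convolution.

    import numpy as np

    def system(x):
        # An arbitrary toy linear, time-invariant system:
        # y[n] = 0.5 * y[n-1] + x[n], starting from rest.
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = (0.5 * y[n - 1] if n > 0 else 0.0) + x[n]
        return y

    N = 50
    delta = np.zeros(N); delta[0] = 1.0
    h = system(delta)                     # the impulse response / Green's function

    x = np.random.randn(N)                # an arbitrary input
    direct = system(x)                    # feed it through the system directly
    via_h = np.convolve(x, h)[:N]         # or superpose weighted, shifted copies of h

    print(np.allclose(direct, via_h))     # True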

I like to think of it this way.  A Green's function represents the DNA, fabric or structure of the system under study.  It describes the response of the system to a very special input - a delta function - which in some sense is the purest response of the system, for all other responses can be determined by superimposing the responses to a collective sum of delta functions.  Once we know the Green's function of a linear differential equation or a system, we know a lot!  It's a small piece of information with grand consequences.
So how does this relate to the original idea of this discussion - the reappearance of similar ideas across disciplines?  Let's look at how Green's functions manifest in various disciplines, where they may in fact go by other names.

Electrical engineers like to invoke the concept of an impulse response.  This idea is used heavily in analog and digital circuit theory and in analog and digital signal processing.  An impulse response is nothing more than the Green's function of a system; it simply goes by a different name.  The idea has some very powerful consequences relating to something known as the convolution theorem, which tells us that convolution of two functions in the time or space domain is merely multiplication in Fourier space.  But that's another story.
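
A quick numerical illustration of that theorem (a sketch with arbitrary sequences; numpy assumed): convolve two signals directly, then do the same thing by multiplying their Fourier transforms.

    import numpy as np

    x = np.random.randn(64)
    h = np.random.randn(16)

    direct = np.convolve(x, h)                 # time-domain convolution
    n_out = len(direct)                        # zero-pad so circular matches linear
    via_fft = np.fft.ifft(np.fft.fft(x, n_out) * np.fft.fft(h, n_out)).real

    print(np.allclose(direct, via_fft))        # True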

In signal processing the impulse response represents more or less the DNA of the linear system under study.  With it, we know the response of the system to an arbitrarily complex input.

In physics Green's functions are called by their rightful name.  They are very important in classical electrodynamics, for example, in solving certain problems in electrostatics and electrodynamics.  I won't discuss these in any depth here, but one very interesting problem relates to notions of causality regarding how the electromagnetic field propagates in space.  One arrives at something unfortunately called the retarded Green's function, which describes how radiation moves through space and time at a finite speed - the speed of light.
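
For the curious, and up to sign and normalization conventions that vary from textbook to textbook, the retarded Green's function of the wave equation in three spatial dimensions looks like this, with the delta function enforcing propagation at exactly the speed of light:

    G_{\mathrm{ret}}(\mathbf{r}, t) = \frac{\delta(t - |\mathbf{r}|/c)}{4\pi|\mathbf{r}|}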

Green's functions also manifest in solving many problems in quantum mechanics.  To invoke a rather lofty example, but one that nicely ties together how this concept spans so many discourses in the sciences, there is the idea of a propagator.  The propagator rears its head in the notoriously abstract area of relativistic quantum mechanics, a.k.a. quantum field theory.  In quantum mechanics a particle such as an electron or a photon is described by a mathematical object known as the wave function.  This concept arises because the things we call particles empirically behave like waves: electrons exhibit wavelike, non-local behavior and interference just as classical waves do, such as water waves and the electromagnetic field.
The wave function captures this idea.  Of course I am omitting the real essence of quantum mechanics - the probabilistic nature of measurement, operator theory, eigenfunctions, the uncertainty relation and all of that - for now!  In any case, we'd like to study the dynamical behavior of the wave function - i.e., how it evolves over time.  The object that describes this evolution is the propagator, which is really just a Green's function under a more evocative name.  And, just like all the examples discussed, it represents the response of our quantum system, more or less, to a delta function input.
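
Schematically, and glossing over the same details omitted above, the propagator K carries the wave function from an earlier time t' to a later time t:

    \psi(x, t) = \int K(x, t;\, x', t')\,\psi(x', t')\,dx'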

In more physical terms, since the propagator relates to the evolution of the wave function over time, the propagator represents the probability amplitude for a particle to move from one spacetime point to another (the squared magnitude of the amplitude represents the probability).  This brings up something called Huygens' principle, which attempts to explain how classical waves propagate.  The basic idea is that every point on a wave front acts as the source of a new wave.  By adding up the linear superposition of this infinite set of generated waves, the overall propagation of the wave is described.  This is a very useful concept in understanding diffraction of classical waves.

This brings up another useful idea.  The notion of a propagator or Green's function is, at bottom, expressing the idea of causality.  That is, something happening at one time will affect something at a later time.

I have always believed that cross-pollination between different areas of study is extremely productive and enlightening.  It's satisfying to see how the same concepts can be applied, perhaps in slightly different forms and with different nomenclature, across fields and discourses.  Green's functions, impulse responses and propagators all capture particular instances of one unifying idea, made possible through the power and simplicity of linearity.