Why Laplace transforms are so useful
By 3Blue1Brown
Summary
## Key takeaways

- **Laplace transforms turn differential equations into algebra**: Laplace transforms are a powerful tool for analyzing dynamic systems because they convert differential equations into algebraic equations, making them significantly easier to solve. [04:11], [10:46]
- **Poles reveal system dynamics: oscillation, decay, or growth**: When a system's Laplace transform is plotted on the s-plane, the location of its poles provides qualitative insights into the system's behavior, indicating oscillation, decay, or growth. [03:46], [12:14]
- **Derivative property simplifies calculus to multiplication**: A key property of Laplace transforms is that they convert differentiation in the time domain into multiplication by 's' in the s-domain, simplifying the analysis of derivatives. [04:48], [17:51]
- **Initial conditions are baked into the transform**: The Laplace transform of a derivative includes a term that accounts for the initial condition of the function, providing a built-in mechanism to incorporate starting states into the analysis. [05:01], [09:58]
- **System behavior is a sum of its pole behaviors**: The overall behavior of a dynamic system is a combination of components corresponding to its poles; poles on the left-half plane lead to decay, while those on the imaginary axis lead to sustained oscillation. [13:10], [14:26]
- **Partial fractions bridge transform to time-domain solution**: By decomposing the Laplace transform into partial fractions, each corresponding to a pole, one can invert the transform to find the exact time-domain solution of the original differential equation. [15:31], [16:16]
Topics Covered
- How Laplace Transforms Turn Calculus Into Algebra
- Poles in the S-Plane Predict System Behavior
- Why Systems 'Wibble' Before Finding Rhythm
- Unpacking the True Purpose of Laplace Transforms
Full Transcript
I want to show you this simple simulation that I put together that has a mass on a
spring, but it's being influenced by an external force that oscillates back and forth.
Now, if there was no external force and you pull out this mass and you just let it go,
the spring has some kind of natural frequency that it wants to oscillate at.
But here, when I'm adding that external force, like a wind blowing back and forth,
it oscillates at a distinct, unrelated frequency.
What I want you to notice is how in the beginning,
you get this very irregular looking behavior.
It gets kind of stronger, and then weaker, and then stronger again,
before eventually it settles into a rhythm.
What specifically is going on there?
How could you mathematically analyze what exactly this weird,
wibbly startup trajectory is, and could you predict how long it takes before the
system hits its stride?
And when it does hit that stride, could you predict
exactly how big the swings back and forth are?
One of the most powerful tools for studying systems like this and many,
many others is the Laplace transform.
And today, I want to show you exactly how it looks to use this
tool to study differential equations and analyze dynamic systems.
For context, this is the third chapter in a sequence all about this Laplace transform.
And as a very quick recap, in the first chapter,
you and I became acquainted with functions that look like e to the s times t,
where s is a complex number, and the output of these functions kind of spirals
through the complex plane.
The key concept you have to have in your mind is what engineers call the s-plane,
which is the complex plane representing all possible values of this term s in the
exponent.
The idea is to think of each point of the plane as encoding the entire
function e to the s times t, and the primary takeaway is that bigger
imaginary values of s correspond to functions with more oscillation,
while negative real parts reflect decay and positive real parts reflect growth.
The reason we care is that many functions in nature can be broken down into
exponential pieces, so what we want is a machine that exposes how that breakdown looks.
This is the key motivation for what Laplace transforms even are,
and we unpacked that in some detail through the last chapter.
Again, in the spirit of a quick recap, the rough way this transform looks
is that it takes a function of time and translates it into a new language,
turning it into a new function whose input is this complex number s.
The key conclusion from last time is that if your function really can be broken into
exponential pieces, then when you plot this new transformed version over the s-plane,
poles in that plot correspond to the exponential pieces hiding inside the original
function.
Symbolically, this amounts to two key properties that I want you to remember.
Number one, if you pump in an exponential function, something like e to the a times t,
it transforms into one divided by s minus a, an expression which you should see in
your mind's eye as a function over the s-plane with a pole above the value a.
Number two, the transform is linear.
What this means is if you have a scaled sum of functions and then you transform them,
it's the same as applying the transform to each individual function
and taking that same scaled sum of the results.
So for example, if your function going in really does look like a combination
of exponential terms, then what comes out looks like this sum of fractions,
which you should read as an expression with multiple different poles.
Each pole reflects one of those exponential pieces.
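If you like seeing these things verified by a machine, both recap properties are easy to check with SymPy; a minimal sketch, where the coefficients 3 and 5 are arbitrary choices for illustration:

```python
# Checking the two recap properties with SymPy's laplace_transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = sp.symbols('a b', real=True)

# Property 1: e^(a t) transforms into 1/(s - a).
F1 = sp.laplace_transform(sp.exp(a*t), t, s, noconds=True)
print(F1)  # 1/(s - a), which SymPy may print as 1/(-a + s)

# Property 2 (linearity): a scaled sum transforms term by term.
f = 3*sp.exp(a*t) + 5*sp.exp(b*t)
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - (3/(s - a) + 5/(s - b))))  # 0
```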
These two properties alone already give you a glimpse of why this is a helpful
tool for getting a qualitative sense for the dynamics of some situation.
If you have some system evolving over time, and with techniques I'll show you shortly,
you're able to find its Laplace transform, then when you see poles in this transform
with imaginary values, that tells you, hey, there's some kind of oscillation.
If those poles have negative real values, that indicates a tendency
to decay towards zero, but any poles with a positive real part
would indicate instability, a tendency to explode away from zero.
Often when you're studying physics, you don't know immediately what function
describes a dynamic system, but you do know a differential equation describing it.
What's neat is that there's a way to go directly from such a
differential equation to the Laplace transform of its solution.
This is useful both as an intermediate step to finding an exact
solution you can circle on your page, but also equally importantly,
it's a meaningful representation of the system in its own right.
The ability to do this is going to rely on a third key property that is
worth remembering, one that explains how exactly Laplace transforms can
convert differential equations into algebra, hence making them easier to solve.
Here's how it looks.
If you take the derivative of some function, little f of t, with respect to time,
and then you take a Laplace transform of that derivative,
the effect is the same as if you had first applied the transform to the
original function, and then multiplied that result by s, at least almost.
There's also this additional term where you subtract off the initial condition,
subtracting the value of your original function, little f, at the time t equals zero.
So in other words, the transform turns differentiation in
the time domain into multiplication over in the s domain.
Now, this should feel very reminiscent of the fact that, for exponential functions,
differentiation in time is the same as multiplication by s,
and it's no coincidence that ultimately the underlying reason is the same.
Now, at first glance, when you look at this rule,
that little minus f of zero term might seem like kind of an annoying
quirk to an otherwise very elegant equation, but really it's a feature, not a bug.
As you apply this to differential equations, this little quirk
means you have a built-in way to account for initial conditions.
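As a quick sanity check before the explanations arrive, you can test this rule on any concrete function; a minimal sketch in SymPy, where cos(3t) + e^(-2t) is just an arbitrary test function:

```python
# Verifying L{f'}(s) = s*F(s) - f(0) on a concrete example.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(3*t) + sp.exp(-2*t)   # arbitrary test function

F   = sp.laplace_transform(f, t, s, noconds=True)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s*F - f.subs(t, 0)         # multiply by s, subtract f(0)

print(sp.simplify(lhs - rhs))    # 0, so the two sides agree
```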
Now, I can hear you asking, why is this property true?
Where does it come from?
How exactly is this connected to the idea of differentiating an exponential,
and where does that minus f of zero term come from in the first place?
I will, of course, explain this.
In fact, I can think of three ways to explain it,
but let's postpone those for just a minute and instead dive into how you actually
use this property in practice.
The example I want to show starts with the simple harmonic oscillator,
which is something you and I studied two chapters ago where you might imagine a mass
on a spring.
As a reminder, one component of the force that acts on that mass pulls it towards a
middle position with a strength that's proportional to its distance away from that
middle.
We write that as negative k times x.
And it's also common to include a damping force,
which acts by slowing down this mass's movement with a strength
proportional to its velocity, again using some negative proportionality constant.
And then the algebra always ends up looking very nice if we move
all of these terms to just one side of the equation, like this.
With no further modification, this right here is a very friendly linear equation.
We talked all about how to solve it simply by substituting in e to the st, which,
depending on your perspective, is either a frustratingly unmotivated guess,
or that's just the established procedure you do once you've learned the
fundamental fact that linear equations like this always have an exponential solution.
That's all previous material, but this time we're going to imagine there
is a third force acting on the mass, some kind of external force,
which in our example will oscillate back and forth according to a cosine function,
like a wind with periodic gusts to the left and to the right.
Importantly, the frequency of this external force will generally have
nothing to do with the natural resonant frequency for the spring.
And this is not an arbitrary example; it came up for us before on this
channel when we studied why light slows down in a medium like glass.
In that context, once we were deep into the video,
the relevant oscillator was a little charge inside the material,
and the external force was an incoming light wave.
Faced with an equation like this, which is no longer a homogeneous equation,
it's more complicated to solve, so here's a preview of the general strategy.
What you do is you take a Laplace transform of all of the terms,
and then you can solve that result to reveal the transformed version of the solution,
and then from there you can invert the process to recover that solution
in our usual language, in the time domain instead of the s domain.
Okay, that's the high-level view, but let's roll up our
sleeves and actually step through this piece by piece.
Following the usual convention, I'm going to write the transform for little
x of t as capital X of s, and then using the rule that we just talked all about,
the transform for its derivative is going to look like s times capital X,
all minus an initial condition, which I'm going to write as little x naught.
And then for the second derivative, what should that look like?
This is actually a good chance to pause and try it out as an exercise.
It basically looks like applying that key rule we just talked about, but twice in a row.
Applying it once, you get that the transform of the second derivative should look
like s times the transform of the first derivative, all minus x prime of zero.
That term is the same as the initial velocity, so I'll write it as v naught.
This is where, in the whole procedure, that part of the
initial condition is kind of automatically accounted for.
And from here, you can substitute in the Laplace transform of that derivative as,
again, s times capital X of s, all minus an initial condition.
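In case you want the bookkeeping spelled out, applying the rule twice looks like this small symbolic sketch, where X, x_0, and v_0 are placeholder names for the transform and the initial conditions:

```python
# Applying the derivative rule twice to get the second-derivative rule.
import sympy as sp

s, x0, v0 = sp.symbols('s x_0 v_0')
X = sp.Function('X')(s)          # stands in for the transform of x(t)

Xp  = s*X - x0                   # L{x'}  = s*X(s) - x(0)
Xpp = s*Xp - v0                  # L{x''} = s*L{x'} - x'(0)

print(sp.expand(Xpp))            # s**2*X(s) - s*x_0 - v_0
```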
We can distribute a couple terms here, substitute it back up in what we had,
and if we bring along those other constants, m, mu, and k for the ride,
we get this kind of large but not wholly unreasonable expression.
And what I want to draw your attention to are these three terms here,
the ones that include a component of capital X of s.
If we add those together and factor out the whole capital X part,
what we're left with is a nice little quadratic polynomial.
What would be very clean and pretty is if that's all we had,
but messing up the elegance is that we have all of these initial condition terms
kind of riding along.
And I say they're messing up the elegance, but again I want to point out
that it actually is very nice to have a baked-in way to incorporate initial
conditions, one that isn't some added step later on.
Nevertheless, for the sake of a clean initial example,
let's assume that both the initial position and the initial velocity are zero,
so our mass on a spring starts off completely stationary.
Keep in mind, for a more general solution, you might want to keep these constants around.
What's nice about ignoring them is that it shines a light on a characteristic pattern of
applying Laplace transforms, where this part of our differential equation,
the left-hand side on the top, basically gets turned into a polynomial that looks like
a kind of mirror image of it, one that has all the same constants,
and where each higher-order derivative, like that x double prime,
turns into some power of s, in this case s squared.
That is really the essence of why this tool works.
Differential expressions turn into polynomials,
and polynomials are something we can do algebra with.
But of course, for this example, what makes it interesting is that we have this
other oscillating force on the right-hand side,
and taking a Laplace transform of a cosine expression is something we talked
all about in the previous episode.
In practice, this is the kind of thing you would either have memorized or look up,
but if you want to pause, I think this is a good chance for an exercise to take a
moment and see if you remember why the transform of a cosine expression should have
two different poles, in this case one pole at omega i,
and the other pole at negative omega i, and take a moment to quickly gut check that
that lines up with the expression you're looking at here,
with this denominator s squared plus omega squared.
What I really want to pop into your mind's eye when you see that in the denominator
is the idea of poles at omega i and negative omega i,
and with a nice intuition of the s-plane, that should feel in your bones like
oscillation with a frequency of omega.
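If you'd rather let a computer do that gut check, here is a short sketch confirming both the transform of cosine and its pole locations:

```python
# L{cos(w t)} = s/(s^2 + w^2), with poles where the denominator vanishes.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
w = sp.symbols('omega', positive=True)

F = sp.laplace_transform(sp.cos(w*t), t, s, noconds=True)
print(F)                          # s/(omega**2 + s**2)
print(sp.solve(sp.denom(F), s))   # [-I*omega, I*omega]
```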
The next step when it comes to just pushing around the
symbols on the page is to divide out by this component here.
And with that, you now have an exact final expression, fully describing,
well not the solution of your system, but the Laplace transform of its solution.
As I previewed, the final step will be to invert the Laplace transform process,
revealing the original mystery function, but before that, even just at this step,
I want you to notice how by seeing this transformed version,
you already get a lot of intuition for the dynamics of the system.
Remember, the key question over in the s domain is where are all the poles,
and this expression has a pole wherever its denominator is equal to zero.
In this example, there are four different values of s that make this denominator zero.
Two of them come from the roots of this polynomial here,
the one I described as a mirror image of the harmonic oscillator equation.
Any of you who watched the previous chapters will remember how this looks.
It amounts to applying the quadratic formula, and as you tweak the constants k,
mu, and m, the roots of that polynomial fall in different places on the s-plane.
But generally, they have a negative real part, and,
assuming the damping coefficient is not too big, they have an imaginary part.
And when you see points on the s-plane like that,
what should pop into your mind's eye is the notion of oscillation with
some kind of decay.
In other words, even when you add this external force to the oscillator,
the solution of the unforced oscillator, what it would do on its own,
is still lurking inside.
Hidden somewhere in there is the oscillation matching
the natural resonant frequency of the spring system.
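To make those pole locations concrete, here is a tiny numerical sketch; the values of m, mu, and k are invented purely for illustration:

```python
# The unforced oscillator's poles are the roots of m*s^2 + mu*s + k.
import numpy as np

m, mu, k = 1.0, 0.4, 9.0          # light damping: mu^2 < 4*m*k
print(np.roots([m, mu, k]))       # approx -0.2 +/- 2.99j

# Negative real part -> decay; nonzero imaginary part -> oscillation.
```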
The other poles of our transformed function come from the roots of this part right here,
which are omega i and negative omega i, and those correspond to the cosine external
force.
In other words, another component of the final dynamics,
really the dominant component in this case, is a tendency to oscillate in sync
with that external force.
And this should feel intuitive.
If you go and push a kid on a swing, but with a frequency that doesn't
necessarily match the natural resonant frequency of the swing,
what ultimately happens to their motion is that they also oscillate
in a way that matches your frequency, not the natural one of the swing.
Here, this might all make a little bit more sense if I return
back to that simulation that I opened with, that has a graph on
the top showing the position of this mass on a spring over time.
Again, notice how that graph has this weird initial startup period where it's sort
of wibbling about before finding its stride, but eventually it does fall into that rhythm
and follow this consistent sine wave pattern, synced up with the external force.
What's going on here is that the solution can be
thought of as a sum of two different components.
One component corresponds to those poles on the left half of the s-plane,
and it matches a solution to the unforced equation,
what the spring would do without any external influence.
The other component corresponds to the two poles of the Laplace transform on
the imaginary axis, meaning it's pure oscillation with no growth or decay,
and it's simply a cosine wave matching the rhythm of that external force.
From this perspective, you can recognize how that initial period of wibbling
about corresponds to the time when that first component is still relevant.
You have these two distinct frequencies competing with each other,
and that first one has not yet decayed away into obscurity,
but eventually it does, leaving behind only the pure cosine.
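If you'd like to watch that competition play out numerically, here is a rough sketch of the forced oscillator using SciPy's solve_ivp; every constant below is invented for illustration:

```python
# Simulating m*x'' + mu*x' + k*x = F0*cos(w*t) from rest.
import numpy as np
from scipy.integrate import solve_ivp

m, mu, k = 1.0, 0.4, 9.0          # natural frequency sqrt(k/m) = 3
F0, w = 1.0, 2.0                  # driving frequency deliberately != 3

def rhs(t, y):
    x, v = y
    return [v, (F0*np.cos(w*t) - mu*v - k*x) / m]

t = np.linspace(0, 40, 2000)
sol = solve_ivp(rhs, (0, 40), [0.0, 0.0], t_eval=t)

# Early in sol.y[0] the two frequencies visibly interfere; by the end
# it is essentially a pure cosine at the driving frequency w.
```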
Okay, okay, okay, I hear some of you saying, that's all well and good,
but what is the actual solution?
I have an exam tomorrow, and I need to circle some expression at the bottom of my paper.
Well, if you do want an exact analytic solution,
this next part is not exactly fun, but it is straightforward.
If you have a fraction like this one that we found,
and you know the roots of its denominator, like the four roots that we just discussed,
you can break it up as a sum of four fractions where the denominator of each one looks
like s minus one of those roots.
The work you have to do goes into solving for these constants up in the numerator.
There's a process for it, it's called partial fraction decomposition.
I'm not going to walk through the details, I don't think you want me to walk
through the details, but I'll leave up the key idea as a little on-screen note.
Once you do solve for those constants, because you know that an
exponential term transforms into a simple fraction, like the ones we're looking at,
inverting the process amounts to inverting that one key rule.
You turn each of these fractions into the appropriate exponential term.
So the locations of each pole, those roots of the denominator,
correspond to the values in the exponents sitting in front of the time t.
And those constants that you have to put in the work to solve for
remain as the constants in front of each of these exponentials.
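In practice you can offload both the decomposition and the inversion to a computer algebra system. A sketch with SymPy, using invented constants rather than the general symbols above:

```python
# Partial fractions and inversion for an invented X(s), taking
# m = F = 1, omega = 2, mu = 1, k = 9 purely for illustration.
import sympy as sp

s, t = sp.symbols('s t')
X = s / ((s**2 + 4) * (s**2 + s + 9))

# Decompose over all four complex poles: a sum of c/(s - pole) terms.
parts = sp.apart(X, s, full=True).doit()

# Each c/(s - pole) term inverts to c*e^(pole*t) by the key rule;
# SymPy can also carry out the whole inversion in one call.
x = sp.inverse_laplace_transform(X, s, t)
print(sp.simplify(x))
```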
If you're the kind of person who likes homework and enjoys digging into the formulas,
and you choose to take on the challenge of solving for those constants,
there's one very interesting conclusion of that exercise I want you to focus on.
Look at the first two terms, corresponding to the poles omega i and negative omega i.
When you solve for the two relevant constants,
the expressions you get are not quite the same, but if mu is very close to zero,
each one is approximately this shared expression that we can factor out.
Now, you know that two imaginary exponentials like this combine to make a cosine.
So this part you're looking at is that final steady-state
cosine rhythm that the mass eventually falls into.
And inside that big expression that you solve for,
the exercise I want to leave you with as homework is to think deeply
about how the amplitude of this final expression depends on the difference
between the resonant frequency of the spring and the frequency of that external force.
In particular, what happens as both of those frequencies get closer together?
And how might this be relevant to anyone wishing to
build a bridge that they don't want to wobble into ruin?
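If you'd like to explore that homework numerically before grinding through the algebra, one hedged approach is to sweep the driving frequency and measure the steady-state amplitude after the transient dies down; every constant here is invented:

```python
# Steady-state amplitude of the forced oscillator vs. driving frequency.
import numpy as np
from scipy.integrate import solve_ivp

m, mu, k, F0 = 1.0, 0.4, 9.0, 1.0   # natural frequency sqrt(k/m) = 3

def steady_amplitude(w):
    def rhs(t, y):
        x, v = y
        return [v, (F0*np.cos(w*t) - mu*v - k*x) / m]
    sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], dense_output=True)
    tail = sol.sol(np.linspace(80, 100, 500))[0]   # after the wibble
    return tail.max()

for w in [1.0, 2.0, 2.9, 3.0, 3.1, 4.0]:
    print(w, steady_amplitude(w))    # amplitude peaks near w = 3
```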
Stepping out from the trees to look over the forest,
you see what I mean about how Laplace transforms can turn a differential equation into
algebra, and how it's all rooted in this third key property where a derivative in time
turns into multiplication by s.
So naturally, the burning question is, why is this property true in the first place?
And like I said, I can think of three different ways to explain it.
One that's elementary but limited, one that's general but a bit opaque,
and then there's my favorite, which requires a little added theory to describe.
The first one is actually not a complete explanation.
The idea is that any time you see a new formula in math,
it's never a bad idea to just try it out on an example you know well.
That way you build a little intuition.
In this case, what's an example that you and I know very well?
Well, we have emphasized to death the fact that if you pump in an exponential function,
something like e to the a t, then its Laplace transform looks like 1 divided by s minus
a.
So let's see what happens for this example.
You know how to take the derivative of an exponential like this.
It's delightfully simple, you just multiply by that constant a.
And then, because of linearity, this means the Laplace
transform also just picks up that added factor of a.
And at first, this actually seems wrong.
It seems inconsistent with the desired conclusion.
We're not multiplying by s, the input of our new transformed function.
Instead, the thing we're multiplying by is this random constant a
that characterizes what specific exponential we happened to throw in.
But this is really just a matter of some gentle algebraic massaging.
Notice what happens if I add this fraction, s minus a over s minus a,
which is the same as adding 1, so I have to subtract off 1 to account for it.
When you combine these two fractions here, you get some nice
cancellation in the numerator, leaving behind this clean factor of s.
And you'll notice we're now subtracting something off from the whole expression, namely 1.
And that happens to be the initial condition.
It's what you get if you plug in t equals 0 to the original function we pumped in.
So in fact, it really is consistent with the desired conclusion.
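That massaging is a one-line check in SymPy, if you want to confirm the cancellation for yourself:

```python
# a/(s - a) equals s/(s - a) minus 1, i.e. s*F(s) minus f(0) = 1.
import sympy as sp

s, a = sp.symbols('s a')
print(sp.simplify(a/(s - a) - (s/(s - a) - 1)))   # 0
```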
Now of course, this is just one very specific function,
this is not a general explanation, but it holds within it the seeds of much
more generality.
If you like exercises, take a moment to convince yourself that this
result is also true for any combination of exponential functions.
This really just amounts to leaning hard on linearity again,
both linearity of the transform and of the derivative.
This is still not a complete explanation, it only applies to combinations
of exponentials, but to be fair that includes every example we've seen so far,
and an overarching theme of this whole series will be how many,
many things really can be broken into exponentials with the right point of view.
The second explanation I want to at least briefly flash
up here is the one that you'll see in most textbooks.
The idea is to simply pull up the definition of a Laplace transform,
which somewhat bizarrely we actually haven't had to look at ever since the last chapter,
and then to evaluate it, you apply integration by parts.
This is another case where I think it's best to just leave the details on screen
for any curious and calculus-savvy students who want to pause and think it through.
It's a perfectly fine and tidy derivation, really short actually,
but sometimes I feel like whenever you appeal to integration by parts,
you can almost see the intuition evaporating away from the audience in front of you.
And in this case, if you stop and ask yourself where that times s really came from,
or why we're subtracting an initial condition,
both of them kind of feel like things that happened to fall out.
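Since that on-screen note isn't reproduced in this transcript, here is a sketch of the derivation for the curious, assuming f grows slowly enough that e^(-st) f(t) vanishes as t goes to infinity:

```latex
\mathcal{L}\{f'(t)\}(s)
  = \int_0^\infty f'(t)\, e^{-st}\, dt
  = \underbrace{\Big[ f(t)\, e^{-st} \Big]_0^\infty}_{0 \,-\, f(0)}
    + s \int_0^\infty f(t)\, e^{-st}\, dt
  = s\, \mathcal{L}\{f(t)\}(s) - f(0)
```

The factor of s comes from differentiating e to the minus s t, and the minus f of zero is the boundary term, which is perhaps why both feel like things that just fall out.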
The third explanation, at least in my own head,
is what really shows why the property is not just true,
but woven into the fabric of what a Laplace transform was born to do.
The caveat is that it requires understanding something we haven't talked about yet,
known as the inverse Laplace transform.
This is something you might have already started wondering about.
If you look back at our differential equation example,
in that very last step where we inverted the process to recover the original function,
a perfectly reasonable question to ask would be how is this supposed to
work if you can't necessarily break the result into these clean fractional pieces?
That question is very closely tied to the question of what Laplace
transforms mean if your original function cannot be broken down
into a discrete sum of exponential pieces in the first place.
This inverse transform is a big enough topic that it deserves its own chapter.
For example, it involves a fun new concept for us known as a contour integral.
What I'd like to do with that next chapter is walk through how you could reinvent this
tool for yourself, starting from a desire to create something that has this third key
property we've been focusing on, where derivatives turn into a kind of multiplication.
I think there's a very natural storyline where slowly tugging
on a certain logical thread leads you to inventing both the
Laplace transform and its inversion formula as a unified pair.
Along the way, you also get to see how it relates
to Fourier transforms and Fourier inversion.
If you're feeling up for a jaunt into the deeper theory of the subject,
come join me in the next chapter.