%config InteractiveShell.ast_node_interactivity = 'last_expr_or_assign'
from sympy import *
init_printing()
I’ve been doing my university physics homework in SymPy and Jupyter for a while now, and I thought I’d share a little about the workflow I’ve developed for it. It might come in useful for you.
Let’s go through an abridged plasma physics homework example with that toolset, and go over the tricks used as they come up. I’ll put the text that I’d put in the homework solution itself in quotes.
> For electrostatic electron waves in an unmagnetized plasma, calculate the correction to the frequency due to the ion motion.
We’ll set up our SymPy namespace first. We’ll dump it all into the main namespace; not the best habit for libraries, but here we want to write less code, so it’ll do.
I also added a little IPython magic in there. This is a little behavior I picked up from working in Julia, where the value of the last expression or assignment in a cell is displayed.
> We take the linearized forms in Fourier space of the following fluid equations:
>
> 1. equation of motion, for ions, with temperature
> 2. equation of motion, for electrons, with temperature
> 3. mass conservation for ions
> 4. mass conservation for electrons
> 5. Poisson’s equation
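A quick note on where the factors of \(-i\omega\) and \(ik\) in the code below come from: linearizing in Fourier space means writing each quantity as an equilibrium value plus a small plane-wave perturbation, so derivatives turn into multiplications. Schematically, for the ion density (with \(n_i\) the perturbation amplitude):

\[
n(x, t) = n_0 + n_i \, e^{i(kx - \omega t)}, \qquad \partial_t \to -i\omega, \qquad \partial_x \to ik.
\]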
Now, let’s define our variables and the set of equations we care about. Note how we’ll liberally use unicode characters to shorten our code. In Jupyter, you can enter \(\omega\) in code cells by simply writing the LaTeX for it and hitting TAB afterwards: `\omega<TAB>`
m_i, m_e, γ, T, n_0, e, ϵ = symbols('m_i m_e gamma T n_0 e varepsilon_0', positive=True)
ω, k = symbols('omega k', positive=True)
v_i, v_e, n_i, n_e = symbols('v_i v_e n_i n_e')  # to be understood as fourier modes
equations = [
    Eq(-I * ω * m_i * n_0 * v_i, +e * n_0 * E - I * k * γ * T * n_i),
    Eq(-I * ω * m_e * n_0 * v_e, -e * n_0 * E - I * k * γ * T * n_e),
    Eq(-I * ω * n_i + I * k * n_0 * v_i, 0),
    Eq(-I * ω * n_e + I * k * n_0 * v_e, 0),
    Eq(I * k * E, e * (n_i - n_e) / ϵ)
]
\(\displaystyle \left[ - i m_{i} n_{0} \omega v_{i} = - i T \gamma k n_{i} + e e n_{0}, \ - i m_{e} n_{0} \omega v_{e} = - i T \gamma k n_{e} - e e n_{0}, \ i k n_{0} v_{i} - i n_{i} \omega = 0, \ i k n_{0} v_{e} - i n_{e} \omega = 0, \ e i k = \frac{e \left(- n_{e} + n_{i}\right)}{\varepsilon_{0}}\right]\)
See? That’s output right there. Neat! Anyway,

> We first solve equations 3 and 4 for the velocities:
v_i_e_solutions = solve(equations[2:4], [v_i, v_e])
\(\displaystyle \left\{ v_{e} : \frac{n_{e} \omega}{k n_{0}}, \ v_{i} : \frac{n_{i} \omega}{k n_{0}}\right\}\)
> We then insert the velocities into the equations of motion:
motion_equations = [eq.subs(v_i_e_solutions) for eq in equations[0:2]]
\(\displaystyle \left[ - \frac{i m_{i} n_{i} \omega^{2}}{k} = - i T \gamma k n_{i} + e e n_{0}, \ - \frac{i m_{e} n_{e} \omega^{2}}{k} = - i T \gamma k n_{e} - e e n_{0}\right]\)
> From this, it is simple to grab the number densities:
n_i_e_solutions = solve(motion_equations, (n_i, n_e))
\(\displaystyle \left\{ n_{e} : \frac{e i e k n_{0}}{T \gamma k^{2} - m_{e} \omega^{2}}, \ n_{i} : - \frac{e i e k n_{0}}{T \gamma k^{2} - m_{i} \omega^{2}}\right\}\)
> And now we can insert those back into the final equation, Poisson’s:
poisson_eq = equations[-1].subs(n_i_e_solutions).simplify()
\(\displaystyle e i k = \frac{e i e^{2} k n_{0} \left(- 2 T \gamma k^{2} + m_{e} \omega^{2} + m_{i} \omega^{2}\right)}{\varepsilon_{0} \left(T \gamma k^{2} - m_{e} \omega^{2}\right) \left(T \gamma k^{2} - m_{i} \omega^{2}\right)}\)
> This we can solve for \(\omega^2\):
ω2_solutions = solve(poisson_eq, ω**2)
\(\displaystyle \left[ \frac{\left(m_{e} + m_{i}\right) \left(T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0}\right) - \sqrt{T^{2} \gamma^{2} k^{4} m_{e}^{2} \varepsilon_{0}^{2} - 2 T^{2} \gamma^{2} k^{4} m_{e} m_{i} \varepsilon_{0}^{2} + T^{2} \gamma^{2} k^{4} m_{i}^{2} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} m_{e}^{2} n_{0} \varepsilon_{0} - 4 T e^{2} \gamma k^{2} m_{e} m_{i} n_{0} \varepsilon_{0} + 2 T e^{2} \gamma k^{2} m_{i}^{2} n_{0} \varepsilon_{0} + e^{4} m_{e}^{2} n_{0}^{2} + 2 e^{4} m_{e} m_{i} n_{0}^{2} + e^{4} m_{i}^{2} n_{0}^{2}}}{2 m_{e} m_{i} \varepsilon_{0}}, \ \frac{\left(m_{e} + m_{i}\right) \left(T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0}\right) + \sqrt{T^{2} \gamma^{2} k^{4} m_{e}^{2} \varepsilon_{0}^{2} - 2 T^{2} \gamma^{2} k^{4} m_{e} m_{i} \varepsilon_{0}^{2} + T^{2} \gamma^{2} k^{4} m_{i}^{2} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} m_{e}^{2} n_{0} \varepsilon_{0} - 4 T e^{2} \gamma k^{2} m_{e} m_{i} n_{0} \varepsilon_{0} + 2 T e^{2} \gamma k^{2} m_{i}^{2} n_{0} \varepsilon_{0} + e^{4} m_{e}^{2} n_{0}^{2} + 2 e^{4} m_{e} m_{i} n_{0}^{2} + e^{4} m_{i}^{2} n_{0}^{2}}}{2 m_{e} m_{i} \varepsilon_{0}}\right]\)
Which is the complete analytical answer. As a sanity check, we can take the limit \(m_i \to \infty\). Starting with the first one:
limit1 = limit(ω2_solutions[0], m_i, oo)
\(\displaystyle \frac{T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0} - \sqrt{T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}}}{2 m_{e} \varepsilon_{0}}\)
Which simplifies further if we help SymPy a little; it didn’t quite notice a pattern in the square root. This is, unfortunately, the kind of issue SymPy tends to run into, and it needs some tinkering sometimes:
numerator, denominator = limit1.as_numer_denom()
sum(arg.factor() for arg in numerator.args) / denominator
\(\displaystyle 0\)
Which is not particularly interesting, corresponding to no oscillation at all. Applying the same procedure to the other solution:
limit2 = limit(ω2_solutions[1], m_i, oo).simplify()
numerator, denominator = limit2.as_numer_denom()
limit2_corrected = (sum(arg.factor() for arg in numerator.args) / denominator).simplify()
\(\displaystyle \frac{T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0}}{m_{e} \varepsilon_{0}}\)
Which is the known result for electrostatic electron oscillations with warm electrons (for \(\gamma = 3\), the Bohm–Gross dispersion relation). Good! We’re sane. Now, we have an analytic expression, but if we want to grab just the first-order correction, we can apply some trickery.
Let’s define
\[M = m_e / m_i\]
This is a small quantity, and we can expand our \(\omega^2\) solutions in a power series in it.
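As a toy illustration of the pattern (with a hypothetical small parameter `x`, unrelated to the derivation), `series` gives us exactly this kind of expansion:

```python
from sympy import symbols, sqrt

# x is a hypothetical small parameter, purely for illustration
x = symbols('x', positive=True)

# n=2 bounds the order: we keep terms up to first order, plus an O(x**2) remainder
expansion = sqrt(1 + x).series(x, n=2)
print(expansion)  # 1 + x/2 + O(x**2)
```

Calling `.removeO()` on the result drops the order term, leaving just the polynomial part.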
We’ll then replace \(m_i\) in the second \(\omega^2\) solution using \(M\):
Mratio = symbols("M", positive=True)
series_expansion = (ω2_solutions[1]).subs(m_i, m_e / Mratio).expand().series(Mratio, n=2);
Note how I’m using `;` at the end to stop this from displaying.
We once again apply some SymPy magic to make the series expansion bearable for the eyes. This took a bunch of StackOverflow searching, which, if you’re on a deadline, may admittedly not be the most productive thing to do:
simplified_series_expansion = collect(series_expansion.removeO(), Mratio, simplify);
\(\displaystyle \frac{M \left(T^{3} \gamma^{3} k^{6} \varepsilon_{0}^{3} + 3 T^{2} e^{2} \gamma^{2} k^{4} n_{0} \varepsilon_{0}^{2} - T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} \sqrt{T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}} + 3 T e^{4} \gamma k^{2} n_{0}^{2} \varepsilon_{0} - 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} \sqrt{T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}} + e^{6} n_{0}^{3} + e^{4} n_{0}^{2} \sqrt{T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}}\right)}{2 m_{e} \varepsilon_{0} \left(T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}\right)} + \frac{T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0} + \sqrt{T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}}}{2 m_{e} \varepsilon_{0}}\)
Applying some further simplification for the square root:
squareroot = numer(simplified_series_expansion.args[0]).args[0]
very_simplified_series_expansion = simplified_series_expansion.subs(squareroot, squareroot.factor()).simplify()
\(\displaystyle \frac{M e^{4} n_{0}^{2} + T^{2} \gamma^{2} k^{4} \varepsilon_{0}^{2} + 2 T e^{2} \gamma k^{2} n_{0} \varepsilon_{0} + e^{4} n_{0}^{2}}{m_{e} \varepsilon_{0} \left(T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0}\right)}\)
And we arrive at the known and expected first order (in \(m_e/m_i\)) correction, substituting our definition of \(M\) back in here:
ion_correction = (very_simplified_series_expansion - limit2_corrected).simplify().subs(Mratio, m_e / m_i)
\(\displaystyle \frac{e^{4} n_{0}^{2}}{m_{i} \varepsilon_{0} \left(T \gamma k^{2} \varepsilon_{0} + e^{2} n_{0}\right)}\)
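As one last cross-check (my own aside, not part of the solution): in the cold-plasma limit \(T \to 0\), this correction reduces to the ion plasma frequency squared,

\[
\lim_{T \to 0} \frac{e^4 n_0^2}{m_i \varepsilon_0 \left(T \gamma k^2 \varepsilon_0 + e^2 n_0\right)} = \frac{n_0 e^2}{m_i \varepsilon_0} = \omega_{pi}^2,
\]

which is exactly what the ions should contribute when pressure effects vanish.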
And that’s that!
Results and thoughts
For some pros and cons:
Positives
- You get to skip out on algebraic errors.
- You get beautiful LaTeX display, at no effort, for each step. This helps a ton if your handwriting is dreadful, like mine is.
- You can resort to first-order corrections etc if you need a check, but you actually get full analytical solutions a lot of the time.
Neutrals
- You really need to watch your assumptions when you define your variables, as SymPy is pretty conservative with its simplifications. This is, of course, both a blessing and a curse.
- The learning curve is a bit steep, but you can accomplish plenty with just a little knowledge; for the rest, you can google. I’ve been putting off picking up its `physics` subpackage for way too long now.
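To illustrate the point about assumptions with a toy example (the symbols here are hypothetical, not the ones from the derivation): whether SymPy will simplify even something as basic as \(\sqrt{x^2}\) depends entirely on what it knows about \(x\).

```python
from sympy import symbols, sqrt, simplify

x = symbols('x')                 # complex by default: no valid simplification exists
p = symbols('p', positive=True)  # declared positive

print(simplify(sqrt(x**2)))  # sqrt(x**2): could be x or -x, so SymPy won't pick
print(sqrt(p**2))            # p: simplified automatically thanks to the assumption
```

This conservatism is exactly why the `positive=True` flags in the `symbols` calls above matter so much.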
Negatives
- There are kinks, as you undoubtedly saw; it can be tough to tell SymPy “look at that square root! Look at it!”. It’ll miss some seemingly obvious stuff, and it can take a fair bit of googling to get it to work properly.
- Runtimes can get steep-ish; it’s not really suited for live work. I’ll `%timeit` the two lines that caused a little trouble here. Admittedly, symbolic math is hard, so I’m not too surprised. Turns out I could only `timeit` the first line in reasonable time…
%timeit series_expansion = (ω2_solutions[1]).subs(m_i, m_e / Mratio).expand().series(Mratio, n = 2);
24.3 s ± 351 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
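Back to the “look at that square root!” point: the nudge SymPy needs can be as small as factoring the radicand yourself. A toy sketch (hypothetical symbols, same trick as in the derivation above):

```python
from sympy import symbols, sqrt, factor

# a and b are toy positive symbols, purely for illustration
a, b = symbols('a b', positive=True)

radicand = a**2 + 2*a*b + b**2
print(sqrt(radicand))          # stays under the radical: SymPy doesn't spot the square
print(sqrt(factor(radicand)))  # a + b: factoring first lets the sqrt cancel
```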
Still, all in all, I think it has plenty of potential. If it’s not there yet, it’s getting there! I’m likely to keep using it and if I come up with any further handy tools, I’ll get back here.