
Testing LaTeX Math Rendering

Published March 2026

This is a comprehensive test post to verify that the LaTeX integration (via remark-math and rehype-katex) is active, correctly styled, and matches the minimalist pure-page typography.

When writing academic or engineering blogs, it is critical to have elegant, perfectly typeset math. For example, inline math like $E = mc^2$ or $\alpha = \frac{\pi}{2}$ should flow naturally with the text line height, without disrupting paragraph spacing.

Below is a stress-test of various LaTeX expressions.


1. Integrals and Limits

Block math should sit elegantly on its own line. Here is the Fourier Transform:

$$\hat{f}(\xi) = \int_{-\infty}^\infty f(x) e^{-2 \pi i \xi x} \, dx$$
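As a numerical aside (not part of the rendering test itself), the transform can be sanity-checked against a known pair: the Gaussian $e^{-\pi x^2}$ is its own Fourier transform. A minimal sketch with a midpoint Riemann sum — the `fourier_transform` helper and its grid bounds are illustrative choices, not a library API:

```python
import cmath
import math

def fourier_transform(f, xi, lo=-10.0, hi=10.0, n=20000):
    """Approximate f_hat(xi) = integral of f(x) e^{-2*pi*i*xi*x} dx via a midpoint sum."""
    dx = (hi - lo) / n
    total = 0j
    for k in range(n):
        x = lo + (k + 0.5) * dx
        total += f(x) * cmath.exp(-2j * math.pi * xi * x) * dx
    return total

# The Gaussian e^{-pi x^2} is its own Fourier transform.
gaussian = lambda x: math.exp(-math.pi * x * x)
approx = fourier_transform(gaussian, 1.0)
exact = math.exp(-math.pi)
print(abs(approx - exact))  # should be very small
```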

And the definition of the derivative:

$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
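The limit can be watched converge numerically: for $f = \sin$ at $a = 1$, the difference quotient should approach $\cos(1)$ as $h$ shrinks. A quick sketch:

```python
import math

# Difference quotient for f = sin at a = 1; the limit is cos(1).
a = 1.0
for h in (1e-1, 1e-3, 1e-5):
    approx = (math.sin(a + h) - math.sin(a)) / h
    print(h, approx, abs(approx - math.cos(a)))  # error shrinks with h
```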

2. Summations and Products

The Taylor series expansion of $e^x$:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$
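The partial sums converge very quickly; twenty terms already match `math.exp` to machine precision. An illustrative check (`exp_taylor` is a name of my choosing):

```python
import math

def exp_taylor(x, terms=20):
    """Partial sum of e^x = sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_taylor(1.0), math.e)  # agree to machine precision
```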

An infinite product formula for $\pi$:

$$\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{4n^2}{4n^2 - 1} = \left( \frac{2}{1} \cdot \frac{2}{3} \right) \left( \frac{4}{3} \cdot \frac{4}{5} \right) \left( \frac{6}{5} \cdot \frac{6}{7} \right) \dots$$
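This product (the Wallis product) converges slowly, which makes it a nice numeric demo: a hundred thousand factors get only a few digits of $\pi/2$. A small sketch:

```python
import math

def wallis(n_terms):
    """Partial Wallis product: product of 4n^2 / (4n^2 - 1), which tends to pi/2."""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= 4 * n * n / (4 * n * n - 1)
    return prod

print(wallis(100000), math.pi / 2)  # slow convergence: close, not exact
```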

3. Matrices and Vectors

We can test complex matrices:

$$\det(A) = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix}$$
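For small matrices, the determinant can be computed directly by Laplace (cofactor) expansion along the first row — a simple recursive sketch, not an efficient implementation:

```python
def det(m):
    """Determinant via cofactor expansion along the first row (O(n!) - demo only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: drop row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # identity: 1
print(det([[2, 1], [5, 3]]))  # 2*3 - 1*5 = 1
```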

A standard rotation matrix in $\mathbb{R}^2$:

$$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
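Applying $R(\theta)$ to a vector is just the two dot products of its rows; rotating $(1, 0)$ by $90°$ should land on $(0, 1)$ up to floating-point rounding. A quick sketch:

```python
import math

def rotate(theta, v):
    """Apply the 2-D rotation matrix R(theta) to vector v = (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = v
    return (c * x - s * y, s * x + c * y)

x, y = rotate(math.pi / 2, (1.0, 0.0))
print(round(x, 12), round(y, 12))  # (0.0, 1.0) up to rounding
```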

4. Mathematical Environments (Cases, Aligned)

The absolute value function defined via piecewise cases:

$$|x| = \begin{cases} x & \text{if } x \ge 0 \\ -x & \text{if } x < 0 \end{cases}$$
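The piecewise definition translates directly into a conditional expression, and should agree with Python's built-in `abs` everywhere:

```python
def piecewise_abs(x):
    """|x| defined by cases: x if x >= 0, else -x."""
    return x if x >= 0 else -x

for x in (-2.5, 0.0, 3):
    assert piecewise_abs(x) == abs(x)
print("piecewise |x| matches abs")
```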

An aligned equation block for step-by-step derivations:

$$\begin{aligned} (x+y)^3 &= (x+y)(x+y)^2 \\ &= (x+y)(x^2 + 2xy + y^2) \\ &= x^3 + 3x^2y + 3xy^2 + y^3 \end{aligned}$$
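The final line of the derivation is an identity, so it can be spot-checked at arbitrary points — if both sides agree for several unrelated inputs, the expansion is almost certainly right:

```python
# Spot-check (x + y)^3 == x^3 + 3x^2 y + 3x y^2 + y^3 at a few points.
for x, y in [(1, 2), (-3, 0.5), (7, -7)]:
    lhs = (x + y) ** 3
    rhs = x**3 + 3 * x**2 * y + 3 * x * y**2 + y**3
    assert abs(lhs - rhs) < 1e-9
print("expansion verified")
```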

5. Gradient Descent (Machine Learning)

Testing subscript alignment and gradients:

$$\theta_{t+1} = \theta_t - \eta \nabla_\theta \mathcal{L}(\theta_t; x^{(i)}, y^{(i)})$$
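The update rule is easy to watch in one dimension. A minimal sketch on the toy loss $\mathcal{L}(\theta) = (\theta - 3)^2$, whose gradient is $2(\theta - 3)$ and whose minimum sits at $\theta = 3$ (the loss and learning rate here are my own choices for illustration):

```python
# Gradient descent on L(theta) = (theta - 3)^2; gradient is 2 * (theta - 3).
theta, eta = 0.0, 0.1
for t in range(100):
    grad = 2 * (theta - 3)
    theta = theta - eta * grad
print(theta)  # converges toward 3
```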

6. Attention Mechanism (Transformer)

The core equation of modern LLMs:

$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
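The equation maps directly to code: score each query against every key, scale by $\sqrt{d_k}$, softmax the scores, and take the weighted sum of value rows. A small pure-Python sketch on lists of row vectors (function names are mine, not a framework API):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))  # query attends mostly to the first key/value
```

Because the softmax weights sum to one, each output row is a convex combination of the value rows — here the query matches the first key more strongly, so the first value dominates.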