This is a comprehensive test post to verify that the LaTeX integration (via remark-math and rehype-katex) is active, correctly styled, and matches the minimalist pure-page typography.
When writing academic or engineering blogs, it is critical to have elegant, perfectly typeset math. For example, inline math like $E = mc^2$ or $\alpha = \frac{\pi}{2}$ should flow naturally with the text line height, not disrupting paragraph spacing.
Below is a stress-test of various LaTeX expressions.
1. Integrals and Limits
Block math should sit elegantly on its own line. Here is the Fourier Transform:
$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i \xi x}\, dx$$
And the definition of the derivative:
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
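The limit definition above is easy to spot-check numerically. Here is a minimal Python sketch (purely illustrative, not part of the rendering test; the function `difference_quotient` and the step size are my own choices) that approximates $f'(a)$ with the difference quotient:

```python
import math

def difference_quotient(f, a, h):
    """Approximate f'(a) with the forward difference (f(a+h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

# Example: f = sin, so f'(0) = cos(0) = 1.
approx = difference_quotient(math.sin, 0.0, 1e-6)
print(approx)  # close to 1.0
```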
2. Summations and Products
The Taylor series expansion of $e^x$:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$
An infinite product formula for $\pi$:
$$\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{4n^2}{4n^2 - 1} = \left( \frac{2}{1} \cdot \frac{2}{3} \right) \left( \frac{4}{3} \cdot \frac{4}{5} \right) \left( \frac{6}{5} \cdot \frac{6}{7} \right) \cdots$$
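Both formulas can be watched converging numerically. A short Python sketch (illustrative only; the helper names are my own) sums the exponential series and takes partial Wallis products:

```python
import math

def exp_series(x, terms=20):
    """Partial sum of the Taylor series e^x = sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

def wallis_partial(n_terms):
    """Partial Wallis product approximating pi / 2."""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= 4 * n**2 / (4 * n**2 - 1)
    return prod

print(exp_series(1.0))             # approaches e ~ 2.71828
print(2 * wallis_partial(100000))  # approaches pi ~ 3.14159
```

The series converges very fast; the Wallis product famously does not, which is why the partial product needs so many factors.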
3. Matrices and Vectors
We can test complex matrices:
$$
\det(A) = \begin{vmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \dots & a_{nn}
\end{vmatrix}
$$
A standard rotation matrix in $\mathbb{R}^2$:
$$
R(\theta) = \begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}
$$
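Applying that matrix to a vector is a two-line computation. A minimal Python sketch (the `rotate` helper is my own, for illustration) multiplies $R(\theta)$ by $(x, y)$:

```python
import math

def rotate(theta, v):
    """Apply the 2x2 rotation matrix R(theta) to a vector v = (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = v
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees gives (0, 1), up to floating-point rounding.
print(rotate(math.pi / 2, (1.0, 0.0)))
```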
4. Mathematical Environments (Cases, Aligned)
The absolute value function defined via piecewise cases:
$$
|x| = \begin{cases}
x & \text{if } x \ge 0 \\
-x & \text{if } x < 0
\end{cases}
$$
An aligned equation block for step-by-step derivations:
$$
\begin{aligned}
(x+y)^3 &= (x+y)(x+y)^2 \\
&= (x+y)(x^2 + 2xy + y^2) \\
&= x^3 + 3x^2y + 3xy^2 + y^3
\end{aligned}
$$
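The derivation above can be spot-checked at arbitrary sample values. This tiny Python snippet (illustrative only) compares the first and last lines of the derivation:

```python
def lhs(x, y):
    """Left-hand side: (x + y)^3."""
    return (x + y) ** 3

def rhs(x, y):
    """Fully expanded form: x^3 + 3x^2y + 3xy^2 + y^3."""
    return x**3 + 3 * x**2 * y + 3 * x * y**2 + y**3

for x, y in [(1, 2), (-3, 0.5), (0.1, 0.2)]:
    assert abs(lhs(x, y) - rhs(x, y)) < 1e-9
print("expansion holds at sample points")
```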
5. Gradient Descent (Machine Learning)
Testing subscript alignment and gradients:
$$\theta_{t+1} = \theta_t - \eta \nabla_\theta \mathcal{L}(\theta_t; x^{(i)}, y^{(i)})$$
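The update rule translates directly into code. Here is a minimal Python sketch (the `sgd_step` helper, the toy loss, and the learning rate are all my own choices for illustration) that runs the update on $\mathcal{L}(\theta) = \theta^2$:

```python
def sgd_step(theta, grad, eta):
    """One gradient-descent update: theta_{t+1} = theta_t - eta * grad."""
    return [t - eta * g for t, g in zip(theta, grad)]

# Minimize L(theta) = theta^2, whose gradient is 2 * theta, from theta = 1.0.
theta = [1.0]
for _ in range(100):
    theta = sgd_step(theta, [2 * theta[0]], eta=0.1)
print(theta[0])  # decays toward the minimum at 0
```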
6. Attention Mechanism (Transformer)
The core equation of modern LLMs:
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
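For readers who prefer code to symbols, here is a toy pure-Python sketch of scaled dot-product attention (illustrative only; real implementations use batched tensor libraries, and the `softmax`/`attention` helpers here are my own):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors (toy sizes)."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [0.0]]
print(attention(Q, K, V))  # the weights favor the first key
```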