
Minkowski Distance Calculator

About this tool

The generalised distance metric that includes Manhattan, Euclidean, and Chebyshev as special cases

The Minkowski Distance Calculator computes the generalised Lp distance between two points. By changing the order parameter p, you get a complete family of distance metrics — including Manhattan (p=1), Euclidean (p=2), and Chebyshev (p→∞) as special cases.

The formula is: d(A,B) = (Σ|aᵢ−bᵢ|ᵖ)^(1/p). As p increases, larger differences contribute disproportionately more to the result. At p=1 all differences contribute equally (Manhattan); at p=2 differences are squared before summing (Euclidean); as p→∞ the largest difference completely dominates (Chebyshev).
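The formula translates directly into code; a minimal sketch in Python (the helper name `minkowski` is ours, not a library function):

```python
import math

def minkowski(a, b, p):
    """Minkowski distance of order p between equal-length points a and b."""
    if p == math.inf:
        # Chebyshev limit: the largest per-axis difference dominates completely.
        return max(abs(x - y) for x, y in zip(a, b))
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

# The special cases are just different values of p in the same formula:
print(minkowski((1, 2), (4, 6), 1))         # Manhattan: 7.0
print(minkowski((1, 2), (4, 6), 2))         # Euclidean: 5.0
print(minkowski((1, 2), (4, 6), math.inf))  # Chebyshev: 4
```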

Why Minkowski matters:

  • Unifies the distance family — one formula explains Manhattan, Euclidean, and Chebyshev.
  • Machine learning flexibility — kNN with Minkowski distance lets you tune p as a hyperparameter to find the best metric for your data.
  • Fractional p values — values between 0 and 1 create non-metric distances used in sparse data settings.

Example

Points A=(1,2) and B=(4,6), varying p:

p=1 (Manhattan): |4−1| + |6−2| = 3 + 4 = 7.
p=2 (Euclidean): √(3² + 4²) = √25 = 5.
p=3: (3³ + 4³)^(1/3) = (27+64)^(1/3) = 91^(1/3) ≈ 4.498.
p→∞ (Chebyshev): max(3, 4) = 4.

Notice how the distance decreases as p increases — higher p values give more weight to the dominant dimension and shrink the total.
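This monotone behaviour is easy to check numerically; a small sketch using the page's formula directly (the inline `minkowski` helper is ours):

```python
import math

def minkowski(a, b, p):
    if p == math.inf:
        return max(abs(x - y) for x, y in zip(a, b))
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

A, B = (1, 2), (4, 6)
distances = [minkowski(A, B, p) for p in (1, 2, 3, math.inf)]
print(distances)  # ≈ [7.0, 5.0, 4.498, 4] — non-increasing as p grows
```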

Frequently Asked Questions

What is Minkowski distance?

Minkowski distance is a generalised metric that unifies several common distance measures under one formula: d(A,B) = (Σ|aᵢ−bᵢ|ᵖ)^(1/p). The parameter p controls the shape of the distance. At p=1 it equals Manhattan distance, at p=2 it equals Euclidean distance, and as p→∞ it converges to Chebyshev distance.

What is the Minkowski distance formula?

d(A, B) = (Σᵢ |aᵢ − bᵢ|ᵖ)^(1/p), where p > 0 is the order parameter and the sum runs over all dimensions. For 2D points: d = (|x₂−x₁|ᵖ + |y₂−y₁|ᵖ)^(1/p). Setting p=1 gives Manhattan, p=2 gives Euclidean, and the limit p→∞ gives Chebyshev.

What is the difference between Minkowski distance and Euclidean distance?

Euclidean distance is a special case of Minkowski distance with p=2. Minkowski is the generalisation: it covers Euclidean (p=2), Manhattan (p=1), and Chebyshev (p→∞). Setting p=2 in this calculator produces exactly the same result as our Euclidean distance calculator.

How does p affect Minkowski distance?

As p increases from 1 toward ∞, the distance decreases (or stays the same), the dominant axis gains influence, and the unit ball changes shape: from a square rotated 45° (p=1) to a circle (p=2) to an axis-aligned square (p→∞). For p < 1 the formula no longer satisfies the triangle inequality and is not a true metric, though such "distances" are still used in sparse data analysis.
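The failure of the triangle inequality for p < 1 can be seen with a tiny example (the points are chosen purely for illustration):

```python
def minkowski(a, b, p):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

p = 0.5
A, C, B = (0, 0), (1, 1), (1, 0)  # B lies "between" A and C

direct = minkowski(A, C, p)                        # (1 + 1)^2 = 4.0
via_B = minkowski(A, B, p) + minkowski(B, C, p)    # 1.0 + 1.0 = 2.0

# The detour through B is "shorter" than the direct route,
# so the triangle inequality fails and p=0.5 is not a true metric.
print(direct > via_B)  # True
```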

What is the Lp norm?

The Lp norm of a vector v is ‖v‖ₚ = (Σ|vᵢ|ᵖ)^(1/p). Minkowski distance is the Lp norm of the difference vector (A−B). Common norms: L1 (sum of absolute values), L2 (Euclidean length), L∞ (maximum absolute value). In machine learning, L1 and L2 regularisation use these norms to penalise model complexity.
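NumPy exposes these norms directly through `np.linalg.norm`, whose `ord` parameter plays the role of p for 1-D vectors (a sketch assuming NumPy is available):

```python
import numpy as np

v = np.array([3.0, -4.0])
l1 = np.linalg.norm(v, ord=1)         # sum of absolute values: 7.0
l2 = np.linalg.norm(v, ord=2)         # Euclidean length: 5.0
linf = np.linalg.norm(v, ord=np.inf)  # largest absolute component: 4.0

# Minkowski distance is the Lp norm of the difference vector A - B:
A, B = np.array([1.0, 2.0]), np.array([4.0, 6.0])
d3 = np.linalg.norm(A - B, ord=3)     # ≈ 4.498, matching the worked example
```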

How is Minkowski distance used in machine learning?

Minkowski distance with tunable p is used in kNN (k-nearest neighbours) as a hyperparameter. Scikit-learn's KNeighborsClassifier accepts a metric='minkowski' parameter with p=1 (Manhattan), p=2 (Euclidean, default), or any other positive value. Tuning p via cross-validation can improve classification accuracy when the optimal distance metric is unknown.
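Scikit-learn handles this internally; the core idea can be sketched in plain Python (a toy kNN classifier on made-up data, not scikit-learn's implementation):

```python
import math

def minkowski(a, b, p):
    if p == math.inf:
        return max(abs(x - y) for x, y in zip(a, b))
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def knn_predict(train, labels, query, k=3, p=2):
    """Classify query by majority vote among its k nearest neighbours under Lp."""
    ranked = sorted(range(len(train)), key=lambda i: minkowski(train[i], query, p))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)

# Two made-up clusters; p is the tunable hyperparameter.
train = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(train, labels, (0.5, 0.5), k=3, p=1))  # "a"
print(knn_predict(train, labels, (5.5, 5.5), k=3, p=2))  # "b"
```

In scikit-learn the equivalent configuration would be `KNeighborsClassifier(metric='minkowski', p=1)`, with p tuned via cross-validation.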

What value of p should I use?

p=2 (Euclidean) is the default and works well for continuous data in low dimensions. p=1 (Manhattan) is more robust to outliers and works better in high-dimensional spaces (less affected by the curse of dimensionality). p→∞ (Chebyshev) is useful when the worst-case axis deviation matters most. Fractional p (0 < p < 1) creates non-metric 'distances' useful for very sparse data.

How does Minkowski distance relate to Manhattan and Chebyshev?

All three are part of the same Lp family. Manhattan = Minkowski(p=1) = L1 norm. Euclidean = Minkowski(p=2) = L2 norm. Chebyshev = limit of Minkowski as p→∞ = L∞ norm. This means you can smoothly interpolate between them by varying p. Use our Manhattan, Euclidean, and Chebyshev calculators to explore each special case.