
Final Project: Building a Scientific Computing Toolkit

In this final stop of the tutorial, we bring together everything covered so far and build a complete Julia scientific computing toolkit (SciKit.jl) from scratch: project architecture, core algorithm implementation, the type system, testing, documentation, CI/CD, and the full release workflow.


1. Project Architecture

1.1 Module Layout

SciKit/
├── Project.toml
├── src/
│   ├── SciKit.jl              # main module entry point
│   ├── types.jl               # type definitions
│   ├── integration.jl         # numerical integration
│   ├── differentiation.jl     # numerical differentiation
│   ├── interpolation.jl       # interpolation
│   ├── optimization.jl        # optimization
│   └── utils.jl               # utility functions
├── test/
│   ├── runtests.jl
│   ├── test_integration.jl
│   ├── test_differentiation.jl
│   ├── test_interpolation.jl
│   └── test_utils.jl
├── docs/
│   ├── make.jl
│   ├── Project.toml
│   └── src/
│       ├── index.md
│       ├── getting-started.md
│       └── api.md
├── .github/workflows/
│   ├── CI.yml
│   └── Documentation.yml
├── .JuliaFormatter.toml
├── README.md
└── LICENSE

1.2 Main Module File

# src/SciKit.jl
module SciKit

using LinearAlgebra
using Statistics

# type definitions
include("types.jl")

# submodules
include("utils.jl")
include("integration.jl")
include("differentiation.jl")
include("interpolation.jl")
include("optimization.jl")

# exported public API
export
    # integration
    quad, simpson, gauss_legendre,
    # differentiation
    central_diff, gradient, jacobian,
    # interpolation
    LinearInterp, CubicSpline,
    # optimization
    gradient_descent, newton_method,
    # types
    QuadratureResult, OptimizationResult

# package version (shadows Base.VERSION inside this module; qualify as SciKit.VERSION elsewhere)
const VERSION = v"0.1.0"

function __init__()
    @info "SciKit.jl v$VERSION loaded"
end

end # module

2. Type System Design

2.1 Abstract Type Hierarchy

# src/types.jl

# abstract type hierarchy
abstract type AbstractIntegrator end
abstract type AbstractDifferentiator end
abstract type AbstractInterpolator{T} end
abstract type AbstractOptimizer end

# quadrature result
"""
    QuadratureResult{T}

Container for the result of a numerical integration.

# Fields
- `value::T`: the integral estimate
- `error::T`: the estimated error
- `n_evals::Int`: number of function evaluations
- `converged::Bool`: whether the estimate converged
"""
struct QuadratureResult{T<:Real}
    value::T
    error::T
    n_evals::Int
    converged::Bool
end

# parametric interpolator
"""
    LinearInterp{T}

Linear interpolator over any real number type.
"""
struct LinearInterp{T<:Real} <: AbstractInterpolator{T}
    xs::Vector{T}
    ys::Vector{T}
    n::Int

    function LinearInterp(xs::Vector{T}, ys::Vector{T}) where T
        n = length(xs)
        length(ys) == n || throw(DimensionMismatch("xs and ys have mismatched lengths"))
        issorted(xs) || throw(ArgumentError("xs must be sorted"))
        new{T}(xs, ys, n)
    end
end

"""
    CubicSpline{T}

三次样条插值器。
"""
struct CubicSpline{T<:Real} <: AbstractInterpolator{T}
    xs::Vector{T}
    ys::Vector{T}
    coeffs::Matrix{T}  # [a, b, c, d] 系数
    n::Int
end

# 优化结果
struct OptimizationResult{T}
    x_min::Vector{T}
    f_min::T
    n_iter::Int
    converged::Bool
    message::String
end

2.2 Benefits of Parametric Types

# Float32 data computes in Float32 (saves memory, GPU-friendly)
xs32 = Float32[1.0, 2.0, 3.0, 4.0]
ys32 = Float32[1.0, 4.0, 9.0, 16.0]
interp32 = LinearInterp(xs32, ys32)  # LinearInterp{Float32}

# Float64 data computes in Float64
xs64 = Float64[1.0, 2.0, 3.0, 4.0]
ys64 = Float64[1.0, 4.0, 9.0, 16.0]
interp64 = LinearInterp(xs64, ys64)  # LinearInterp{Float64}

# BigFloat for high-precision computation
xs_big = BigFloat[1, 2, 3, 4]
ys_big = BigFloat[1, 4, 9, 16]
interp_big = LinearInterp(xs_big, ys_big)  # LinearInterp{BigFloat}
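To make the payoff concrete, here is a small self-contained comparison (the struct names `LooseBox`/`TightBox` are hypothetical, not part of SciKit): a field typed with the abstract `Real` forces every element to be boxed, while a parametric field fixes a concrete element type per instance, which is what lets the compiler specialize.

```julia
# Hypothetical structs illustrating abstract vs. parametric field types.
struct LooseBox            # abstract element type: every access is boxed
    xs::Vector{Real}
end

struct TightBox{T<:Real}   # concrete T fixed per instance: no boxing
    xs::Vector{T}
end

sum_all(b) = sum(b.xs)

loose = LooseBox(Real[1.0, 2.0, 3.0])
tight = TightBox([1.0, 2.0, 3.0])

println(typeof(tight))                     # TightBox{Float64}
println(sum_all(loose) == sum_all(tight))  # true — same answer, but the loose
                                           # version allocates on every element
```

Benchmarking `sum_all(loose)` against `sum_all(tight)` with BenchmarkTools' `@btime` makes the allocation difference visible.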

3. Core Algorithms: Numerical Integration

3.1 Composite Trapezoidal Rule

# src/integration.jl

"""
    quad(f, a, b; n=1000) -> QuadratureResult

使用复合梯形法则计算 `f` 在 `[a, b]` 上的积分。

# 示例
```julia
result = quad(sin, 0, π)
result.value  # ≈ 2.0

""" function quad(f::F, a::T, b::T; n::Int=1000) where {F, T<:Real} n > 0 || throw(ArgumentError(“n 必须为正”))

h = (b - a) / n
s = (f(a) + f(b)) / 2

for i in 1:n-1
    s += f(a + i * h)
end

value = s * h

# 误差估计(Richardson 外推)
n2 = 2n
h2 = (b - a) / n2
s2 = (f(a) + f(b)) / 2
for i in 1:n2-1
    s2 += f(a + i * h2)
end
value2 = s2 * h2

err = abs(value2 - value)

return QuadratureResult{T}(value2, err, 2n + 1, err < 1e-10)

end

统一接口

quad(f, a, b; kwargs…) = quad(f, promote(float(a), float(b))…; kwargs…)


3.2 Simpson's Rule
"""
    simpson(f, a, b; n=1000) -> QuadratureResult

使用 Simpson 复合法则计算积分。精度高于梯形法则。
"""
function simpson(f::F, a::T, b::T; n::Int=1000) where {F, T<:Real}
    iseven(n) || (n += 1)  # Simpson 要求偶数个子区间
    
    h = (b - a) / n
    
    s = f(a) + f(b)
    for i in 1:2:n-1
        s += 4 * f(a + i * h)
    end
    for i in 2:2:n-2
        s += 2 * f(a + i * h)
    end
    
    value = s * h / 3
    
    # 误差估计
    n2 = 2n
    h2 = (b - a) / n2
    s2 = f(a) + f(b)
    for i in 1:2:n2-1
        s2 += 4 * f(a + i * h2)
    end
    for i in 2:2:n2-2
        s2 += 2 * f(a + i * h2)
    end
    value2 = s2 * h2 / 3
    
    err = abs(value2 - value) / 15  # Simpson 误差阶 O(h^4)
    
    return QuadratureResult{T}(value2, err, n + 1, err < 1e-12)
end

3.3 Gauss–Legendre Quadrature

# Gaussian quadrature nodes and weights (precomputed module-level constants)
const GAUSS_POINTS = Dict{Int, Vector{Float64}}(
    2 => [-0.5773502691896257, 0.5773502691896257],
    3 => [-0.7745966692414834, 0.0, 0.7745966692414834],
    4 => [-0.8611363115940526, -0.3399810435848563,
           0.3399810435848563,  0.8611363115940526],
    5 => [-0.9061798459386640, -0.5384693101056831, 0.0,
           0.5384693101056831,  0.9061798459386640],
)

const GAUSS_WEIGHTS = Dict{Int, Vector{Float64}}(
    2 => [1.0, 1.0],
    3 => [0.5555555555555556, 0.8888888888888888, 0.5555555555555556],
    4 => [0.3478548451374538, 0.6521451548625461,
          0.6521451548625461, 0.3478548451374538],
    5 => [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
          0.4786286704993665, 0.2369268850561891],
)

"""
    gauss_legendre(f, a, b; order=5)

使用 Gauss-Legendre 求积计算积分。对多项式精确。
"""
function gauss_legendre(f::F, a::T, b::T; order::Int=5) where {F, T<:Real}
    haskey(GAUSS_POINTS, order) || throw(ArgumentError("order 必须为 2-5"))
    
    points = GAUSS_POINTS[order]
    weights = GAUSS_WEIGHTS[order]
    
    # 映射到 [a, b]
    mid = (a + b) / 2
    half = (b - a) / 2
    
    s = zero(T)
    for (p, w) in zip(points, weights)
        s += w * f(mid + half * p)
    end
    
    return QuadratureResult{T}(s * half, zero(T), order, true)
end
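As a quick standalone sanity check of the tabulated 5-point rule (independent of the package code): an n-point Gauss–Legendre rule integrates polynomials up to degree 2n − 1 exactly, so x⁸ on [−1, 1], whose exact integral is 2/9, should be reproduced to machine precision.

```julia
# 5-point Gauss–Legendre nodes and weights on [-1, 1], as tabulated above
points  = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831,  0.9061798459386640]
weights = [ 0.2369268850561891,  0.4786286704993665, 0.5688888888888889,
            0.4786286704993665,  0.2369268850561891]

f(x) = x^8                   # degree 8 <= 2*5 - 1, so the rule is exact
approx = sum(w * f(p) for (p, w) in zip(points, weights))
println(abs(approx - 2/9))   # on the order of machine epsilon
```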

4. Core Algorithms: Numerical Differentiation

# src/differentiation.jl

"""
    central_diff(f, x; h=1e-8)

Compute the derivative using the central difference formula.
"""
function central_diff(f::F, x::T; h::Real=1e-8) where {F, T<:Real}
    return (f(x + h) - f(x - h)) / (2h)
end

"""
    gradient(f, x; h=1e-8)

计算标量函数的梯度向量。
"""
function gradient(f::F, x::Vector{T}; h::Real=1e-8) where {F, T<:Real}
    n = length(x)
    grad = zeros(T, n)
    fx = f(x)
    
    for i in 1:n
        x_plus = copy(x)
        x_minus = copy(x)
        x_plus[i] += h
        x_minus[i] -= h
        grad[i] = (f(x_plus) - f(x_minus)) / (2h)
    end
    
    return grad
end

"""
    jacobian(f, x; h=1e-8)

Compute the Jacobian matrix of a vector-valued function.
"""
function jacobian(f::F, x::Vector{T}; h::Real=1e-8) where {F, T<:Real}
    n = length(x)
    fx = f(x)
    m = length(fx)
    J = zeros(T, m, n)
    
    for i in 1:n
        x_plus = copy(x)
        x_minus = copy(x)
        x_plus[i] += h
        x_minus[i] -= h
        J[:, i] = (f(x_plus) - f(x_minus)) / (2h)
    end
    
    return J
end
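The central difference has O(h²) truncation error, so shrinking h by 10× should shrink the error by roughly 100×, until floating-point cancellation takes over. A self-contained check:

```julia
# standalone copy of the central-difference formula from above
central_diff(f, x; h=1e-8) = (f(x + h) - f(x - h)) / (2h)

exact = cos(1.0)   # d/dx sin(x) at x = 1
for h in (1e-1, 1e-2, 1e-3)
    err = abs(central_diff(sin, 1.0; h=h) - exact)
    println("h = $h   error = $err")
end
```

Note that the practical optimum for the central-difference step is around `cbrt(eps()) ≈ 6e-6`: much smaller steps (including the default `h=1e-8` above) start losing accuracy to cancellation rather than gaining it.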

5. Core Algorithms: Interpolation

# src/interpolation.jl

"""
    (interp::LinearInterp)(x)

线性插值求值。
"""
function (interp::LinearInterp{T})(x::Real) where T
    x = T(x)
    
    # 边界检查
    (interp.xs[1] <= x <= interp.xs[end]) ||
        throw(DomainError(x, "超出插值范围"))
    
    # 二分查找
    lo, hi = 1, interp.n
    while hi - lo > 1
        mid = (lo + hi) ÷ 2
        if interp.xs[mid] <= x
            lo = mid
        else
            hi = mid
        end
    end
    
    # 线性插值
    t = (x - interp.xs[lo]) / (interp.xs[hi] - interp.xs[lo])
    return interp.ys[lo] + t * (interp.ys[hi] - interp.ys[lo])
end

"""
    CubicSpline(xs, ys)

构造三次样条插值器(自然边界条件)。
"""
function CubicSpline(xs::Vector{T}, ys::Vector{T}) where T
    n = length(xs)
    n >= 3 || throw(ArgumentError("至少需要 3 个数据点"))
    
    # 计算二阶导数(三对角系统)
    h = diff(xs)
    alpha = zeros(T, n)
    for i in 2:n-1
        alpha[i] = 3/h[i] * (ys[i+1] - ys[i]) - 3/h[i-1] * (ys[i] - ys[i-1])
    end
    
    # Thomas 算法求解三对角系统
    l = ones(T, n)
    mu = zeros(T, n)
    z = zeros(T, n)
    
    for i in 2:n-1
        l[i] = 2(xs[i+1] - xs[i-1]) - h[i-1] * mu[i-1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i-1] * z[i-1]) / l[i]
    end
    
    # 反向求解系数
    c = zeros(T, n)
    b = zeros(T, n-1)
    d = zeros(T, n-1)
    
    for j in n-1:-1:1
        c[j] = z[j] - mu[j] * c[j+1]
        b[j] = (ys[j+1] - ys[j]) / h[j] - h[j] * (c[j+1] + 2c[j]) / 3
        d[j] = (c[j+1] - c[j]) / (3h[j])
    end
    
    # 存储系数 [a, b, c, d]
    coeffs = zeros(T, n-1, 4)
    coeffs[:, 1] = ys[1:end-1]  # a
    coeffs[:, 2] = b             # b
    coeffs[:, 3] = c[1:end-1]   # c
    coeffs[:, 4] = d             # d
    
    return CubicSpline{T}(xs, ys, coeffs, n-1)
end

function (cs::CubicSpline{T})(x::Real) where T
    x = T(x)

    # bounds check
    (cs.xs[1] <= x <= cs.xs[end]) ||
        throw(DomainError(x, "outside the interpolation range"))

    # binary search over the knots (cs.n is the interval count, so there are cs.n + 1 knots)
    lo, hi = 1, cs.n + 1
    while hi - lo > 1
        mid = (lo + hi) ÷ 2
        if cs.xs[mid] <= x
            lo = mid
        else
            hi = mid
        end
    end

    # evaluate the cubic on interval `lo`
    dx = x - cs.xs[lo]
    c = cs.coeffs[lo, :]
    return c[1] + c[2]*dx + c[3]*dx^2 + c[4]*dx^3
end
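The Thomas algorithm used inside the constructor is worth seeing on its own. A minimal standalone version (the `thomas` helper name is hypothetical, not part of SciKit), verified against Julia's built-in tridiagonal solve:

```julia
using LinearAlgebra  # for Tridiagonal and the reference solve

# Solve a tridiagonal system in O(n): dl = sub-diagonal, d = diagonal,
# du = super-diagonal, b = right-hand side.
function thomas(dl, d, du, b)
    n = length(d)
    c = copy(du); x = copy(b); dd = copy(d)
    for i in 2:n                      # forward elimination
        w = dl[i-1] / dd[i-1]
        dd[i] -= w * c[i-1]
        x[i]  -= w * x[i-1]
    end
    x[n] /= dd[n]
    for i in n-1:-1:1                 # back substitution
        x[i] = (x[i] - c[i] * x[i+1]) / dd[i]
    end
    return x
end

dl = [1.0, 1.0, 1.0]          # sub-diagonal
d  = [4.0, 4.0, 4.0, 4.0]     # diagonal
du = [1.0, 1.0, 1.0]          # super-diagonal
b  = [5.0, 6.0, 6.0, 5.0]
x  = thomas(dl, d, du, b)
println(x ≈ Tridiagonal(dl, d, du) \ b)   # true
```

The algorithm is stable for diagonally dominant systems, which the natural-spline system always is.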

6. Core Algorithms: Optimization

# src/optimization.jl

"""
    gradient_descent(f, x0; lr=0.01, max_iter=1000, tol=1e-6)

梯度下降优化。
"""
function gradient_descent(f::F, x0::Vector{T};
                          lr::Real=0.01, max_iter::Int=1000,
                          tol::Real=1e-6) where {F, T<:Real}
    x = copy(x0)
    fx = f(x)
    
    for iter in 1:max_iter
        g = gradient(f, x; h=1e-8)
        x_new = x .- lr .* g
        fx_new = f(x_new)
        
        if norm(x_new - x) < tol
            return OptimizationResult{T}(x_new, fx_new, iter, true, "收敛")
        end
        
        x = x_new
        fx = fx_new
    end
    
    return OptimizationResult{T}(x, fx, max_iter, false, "达到最大迭代次数")
end

"""
    newton_method(f, x0; max_iter=100, tol=1e-8)

牛顿法优化(使用数值 Hessian)。
"""
function newton_method(f::F, x0::Vector{T};
                       max_iter::Int=100,
                       tol::Real=1e-8) where {F, T<:Real}
    x = copy(x0)
    n = length(x)
    
    for iter in 1:max_iter
        g = gradient(f, x; h=1e-6)
        
        if norm(g) < tol
            return OptimizationResult{T}(x, f(x), iter, true, "梯度接近零")
        end
        
        # 数值 Hessian
        H = zeros(T, n, n)
        h = 1e-5
        for i in 1:n
            for j in i:n
                xpp = copy(x); xpp[i] += h; xpp[j] += h
                xpm = copy(x); xpm[i] += h; xpm[j] -= h
                xmp = copy(x); xmp[i] -= h; xmp[j] += h
                xmm = copy(x); xmm[i] -= h; xmm[j] -= h
                H[i,j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4h^2)
                H[j,i] = H[i,j]
            end
        end
        
        # 加正则化确保正定
        H += 1e-6 * I
        
        x = x - H \ g
    end
    
    return OptimizationResult{T}(x, f(x), max_iter, false, "达到最大迭代次数")
end
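A minimal end-to-end run of the same descent idea, self-contained (with its own finite-difference gradient instead of the package's `gradient`), on f(x) = ‖x‖², whose unique minimizer is the origin:

```julia
using LinearAlgebra: norm

# standalone central-difference gradient
function numgrad(f, x; h=1e-6)
    g = similar(x)
    for i in eachindex(x)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)
    end
    return g
end

# fixed-step gradient descent
function descend(f, x0; lr=0.1, iters=200)
    x = copy(x0)
    for _ in 1:iters
        x = x - lr * numgrad(f, x)
    end
    return x
end

f(x) = sum(abs2, x)
x_min = descend(f, [1.0, -2.0])
println(norm(x_min) < 1e-6)   # true: each step scales x by (1 - 2*lr) = 0.8
```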

7. Utility Functions

# src/utils.jl
using Printf  # for @printf

"""
    linspace(a, b, n)

Generate an evenly spaced vector, like MATLAB's linspace.
"""
linspace(a, b, n) = range(a, b; length=n)

"""
    numdiff_table(f, x; steps)

Print a convergence table for numerical differentiation (Richardson-style halving of h).
"""
function numdiff_table(f, x; steps=[1.0/2^k for k in 0:10])
    # reference value at a near-optimal step size
    # (h = 1e-12 would be dominated by floating-point cancellation)
    exact = central_diff(f, x; h=1e-6)

    println("h\t\testimate\terror\t\tratio")
    println("-"^60)

    prev_est = nothing
    for h in steps
        est = central_diff(f, x; h=h)
        err = abs(est - exact)

        ratio = prev_est !== nothing ? abs(prev_est - exact) / max(err, 1e-16) : NaN

        @printf "%.2e\t%.8f\t%.2e\t%.2f\n" h est err ratio
        prev_est = est
    end
end

8. Testing Strategy

8.1 Unit Tests

# test/test_integration.jl
using Test
using SciKit

@testset "数值积分" begin
    @testset "quad — 梯形法则" begin
        result = quad(sin, 0, π)
        @test result.value  2.0 atol=1e-6
        @test result.converged
        
        result = quad(x -> x^2, 0, 1)
        @test result.value  1/3 atol=1e-6
        
        # 零长度区间
        result = quad(sin, 1, 1)
        @test result.value  0.0
    end
    
    @testset "simpson — Simpson 法则" begin
        result = simpson(sin, 0, π)
        @test result.value  2.0 atol=1e-10
        
        # 多项式精确
        result = simpson(x -> x^3, 0, 1)
        @test result.value  0.25 atol=1e-12
    end
    
    @testset "gauss_legendre — 高斯积分" begin
        for order in 2:5
            result = gauss_legendre(x -> x^(2*order-2), -1, 1; order=order)
            @test result.value  2/(2*order-1) atol=1e-10
        end
    end
    
    @testset "参数验证" begin
        @test_throws ArgumentError quad(sin, 0, 1; n=0)
    end
end

8.2 Property-Based Tests

# test/test_properties.jl
using Test
using SciKit

@testset "属性测试" begin
    @testset "线性性质" begin
        f(x) = 3x + 2
        result = quad(f, 0, 5)
        @test result.value  42.5  # ∫₀⁵ (3x+2) dx = 3·25/2 + 10 = 47.5
    end
    
    @testset "插值边界条件" begin
        xs = [0.0, 1.0, 2.0, 3.0]
        ys = [0.0, 1.0, 4.0, 9.0]
        interp = LinearInterp(xs, ys)
        
        # 插值器应通过所有数据点
        for (x, y) in zip(xs, ys)
            @test interp(x)  y
        end
    end
    
    @testset "微分近似精度" begin
        # 中心差分误差应为 O(h²)
        f(x) = sin(x)
        exact = cos(1.0)
        
        err1 = abs(central_diff(f, 1.0; h=1e-2) - exact)
        err2 = abs(central_diff(f, 1.0; h=1e-4) - exact)
        
        # err1/err2 ≈ (1e-2/1e-4)² = 10000
        @test err1 / err2 > 100
    end
end

8.3 Integration Tests

# test/test_integration_e2e.jl
using Test
using SciKit

@testset "end to end" begin
    # compose integration with interpolation
    xs = range(0, 2π, length=100)
    ys = sin.(xs)

    # interpolate
    interp = LinearInterp(collect(xs), collect(ys))

    # integrate the interpolated function
    result = quad(interp, 0, 2π; n=10000)
    @test result.value ≈ 0.0 atol=1e-3  # sin integrates to 0 over a full period
end

9. Writing Documentation

9.1 Documenter.jl Configuration

# docs/make.jl
using Documenter
using SciKit

makedocs(;
    modules=[SciKit],
    sitename="SciKit.jl",
    format=Documenter.HTML(;
        prettyurls=get(ENV, "CI", nothing) == "true",
        canonical="https://user.github.io/SciKit.jl",
        assets=["assets/custom.css"],
    ),
    pages=[
        "Home" => "index.md",
        "Getting Started" => "getting-started.md",
        "Tutorials" => [
            "Numerical Integration" => "tutorials/integration.md",
            "Numerical Differentiation" => "tutorials/differentiation.md",
            "Interpolation" => "tutorials/interpolation.md",
        ],
        "API Reference" => "api.md",
    ],
)

deploydocs(;
    repo="github.com/user/SciKit.jl",
    devbranch="main",
)

9.2 Docstring Example

"""
    quad(f, a::Real, b::Real; n::Int=1000) -> QuadratureResult

使用复合梯形法则计算函数 `f` 在区间 `[a, b]` 上的定积分。

# 参数
- `f`: 被积函数,接受单个实数参数
- `a`: 积分下限
- `b`: 积分上限
- `n`: 子区间数量(默认 1000)

# 返回值
返回 `QuadratureResult`,包含:
- `value`: 积分近似值
- `error`: 误差估计
- `n_evals`: 函数求值次数
- `converged`: 是否收敛

# 示例
```julia
# 计算 ∫₀π sin(x) dx = 2
result = quad(sin, 0, π)
result.value  # ≈ 2.0

# 自定义函数
f(x) = exp(-x^2)
result = quad(f, -5, 5)  # 近似 √π

```

# See also
`simpson`, `gauss_legendre`
"""
function quad end

10. Performance Benchmarks
# benchmark/benchmarks.jl
using BenchmarkTools
using SciKit

suite = BenchmarkGroup()

# integration benchmarks
suite["integration"] = BenchmarkGroup()
suite["integration"]["quad_sin"] = @benchmarkable quad(sin, 0.0, π)
suite["integration"]["simpson_sin"] = @benchmarkable simpson(sin, 0.0, π)

# differentiation benchmarks
suite["differentiation"] = BenchmarkGroup()
suite["differentiation"]["central_diff"] = @benchmarkable central_diff(sin, 1.0)
suite["differentiation"]["gradient_10d"] = @benchmarkable gradient(
    x -> sum(x.^2), ones(10)
)

# interpolation benchmarks
suite["interpolation"] = BenchmarkGroup()
xs = collect(range(0, 1, length=1000))
ys = sin.(xs)
interp = LinearInterp(xs, ys)
suite["interpolation"]["linear_eval"] = @benchmarkable $interp(0.5)

# tune, run, and report
tune!(suite)
results = run(suite, verbose=true)

11. CI/CD Pipeline

# .github/workflows/CI.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        version: ['1.10', '1.11']
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.version }}
      - uses: julia-actions/cache@v2
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
      - uses: julia-actions/julia-processcoverage@v1
      - uses: codecov/codecov-action@v4
        with:
          files: lcov.info

  docs:
    name: Documentation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - uses: julia-actions/cache@v2
      - run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
      - run: julia --project=docs/ docs/make.jl
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}

12. Releasing to the Julia Registry

# 1. make sure the tests pass
julia --project -e 'using Pkg; Pkg.test()'

# 2. bump the version number
# Project.toml: version = "0.1.0"

# 3. commit and push
git add -A
git commit -m "SciKit.jl v0.1.0"
git tag v0.1.0
git push --tags

# 4. register
# comment `@JuliaRegistrator register` on a GitHub commit or issue

13. Project Summary

Tech Stack Overview

| Layer | Technology | Notes |
|---|---|---|
| Language | Julia 1.10+ | multiple dispatch, parametric types |
| Type system | abstract + parametric | flexible and efficient |
| Testing | Test.jl + BenchmarkTools | unit / integration / property tests |
| Documentation | Documenter.jl | API docs + tutorials |
| CI/CD | GitHub Actions | automated tests and docs deployment |
| Formatting | JuliaFormatter | consistent code style |
| Registration | JuliaRegistrator | publish to the General registry |

Design Principles

| Principle | Practice |
|---|---|
| Type safety | parametric types, avoid `Any` |
| Performance | type stability, avoid boxing |
| Composability | uniform interfaces (callable structs) |
| Testability | prefer pure functions |
| Documentation | a docstring for every public API |

14. Outlook

Possible Extensions

| Direction | Details |
|---|---|
| GPU acceleration | use CUDA.jl for large-scale computation |
| Automatic differentiation | integrate ForwardDiff.jl / Zygote.jl |
| Parallelism | use Threads / Distributed |
| More algorithms | ODE solvers, FFT, sparse matrices |
| Bindings | call LAPACK/FFTW via ccall |

Business Scenarios

Scenario 1: An internal library for a research team

A research group packages its commonly used numerical methods as a Julia package with tests and documentation, distributed through an internal registry so every team member uses the same algorithm implementations.

Scenario 2: A teaching tool

The algorithms from a numerical analysis course are implemented as an interactive Julia package; students call the functions, inspect the results, and build intuition for how the algorithms work.

Scenario 3: A foundation library for engineering computation

As the base library of a large engineering project, it provides rigorously tested numerical methods, with CI/CD guaranteeing that no commit breaks existing functionality.


Summary

| Phase | Key Tasks |
|---|---|
| Design | module layout, type hierarchy, API design |
| Implementation | core algorithms, type stability, docstrings |
| Testing | unit, property, and integration tests |
| Documentation | Documenter.jl, tutorials, API reference |
| Engineering | CI/CD, formatting, coverage |
| Release | version management, registration in the registry |

Further Reading