littlesnitch-linux

Little Snitch for Linux — eBPF Network Monitor


Skill by ara.so — Daily 2026 Skills collection.
Little Snitch for Linux is an open-source eBPF-based network monitoring and blocking toolkit written in Rust. It attaches eBPF programs to the Linux kernel to intercept network connections, then shares data between kernel and user space via eBPF maps. The open-source portion includes eBPF programs, shared types, and a demo runner; the full product from Objective Development includes additional proprietary UI and rule-engine components.


Architecture Overview


┌─────────────────────────────────┐
│        demo-runner (user space) │
│  - loads eBPF programs          │
│  - populates eBPF maps          │
│  - reads events from kernel     │
└────────────┬────────────────────┘
             │  eBPF maps (shared memory)
┌────────────▼────────────────────┐
│        ebpf crate (kernel)      │
│  - eBPF programs (TC, LSM, etc) │
│  - intercepts network syscalls  │
└────────────┬────────────────────┘
┌────────────▼────────────────────┐
│        common crate             │
│  - shared types & functions     │
│  - used by both kernel & user   │
└─────────────────────────────────┘
Crates:
  • ebpf/
    — eBPF kernel-space programs (compiled to BPF bytecode)
  • common/
    — Shared types between kernel and user space
  • demo-runner/
    — User-space loader and event consumer
  • webroot/
    — JavaScript web UI


Prerequisites


Rust Toolchains


```bash
# Install stable toolchain
rustup toolchain install stable

# Install nightly with rust-src (required for eBPF compilation)
rustup toolchain install nightly --component rust-src
```

System Dependencies


```bash
# Install bpf-linker
cargo install bpf-linker

# Install clang (required for eBPF compilation)
sudo apt install clang     # Ubuntu/Debian
sudo dnf install clang     # Fedora/RHEL
sudo pacman -S clang       # Arch Linux
```

Kernel Requirements


  • Linux kernel 5.15+ (for BTF and CO-RE support)
  • eBPF enabled in kernel config (CONFIG_BPF=y, CONFIG_BPF_SYSCALL=y)
  • CAP_BPF or root privileges to load eBPF programs
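A quick way to check these requirements on a running system (a sketch; config file locations vary by distro, and /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled):

```shell
# Kernel version (needs 5.15+)
uname -r

# BTF support: this pseudo-file exists when the kernel was built
# with CONFIG_DEBUG_INFO_BTF=y
if [ -e /sys/kernel/btf/vmlinux ]; then
    echo "BTF: available"
else
    echo "BTF: not available"
fi

# eBPF options in the kernel config (location varies by distro)
grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=' "/boot/config-$(uname -r)" 2>/dev/null \
    || zgrep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=' /proc/config.gz 2>/dev/null \
    || echo "kernel config not readable here"
```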

Build & Run


```bash
# Clone the repository
git clone https://github.com/obdev/littlesnitch-linux
cd littlesnitch-linux

# Build everything (eBPF programs are auto-built via build scripts)
cargo build --release

# Run the demo runner (requires root or CAP_BPF)
sudo cargo run --release

# Check without building
cargo check
```

> **Note:** Cargo build scripts automatically compile the eBPF programs and embed them in the binary — no manual eBPF compilation step needed.

---

Blocklist Configuration


The demo runner loads two blocklist files at startup:

blocked_hosts.txt

One IP address or hostname per line:

```
93.184.216.34
203.0.113.0
198.51.100.1
```

blocked_domains.txt

One domain suffix per line (blocks the domain and all subdomains):

```
example.com
ads.doubleclick.net
tracking.example.org
```

Place these files in the working directory before running:

```bash
echo "93.184.216.34" > blocked_hosts.txt
echo "example.com" > blocked_domains.txt
sudo cargo run --release
```

Common Crate — Shared Types


The common crate defines types shared between kernel eBPF code and user space. When extending the project, add new shared types here.

```rust
// common/src/lib.rs — example of how shared types are structured
#![no_std]

// Connection event sent from kernel to user space via perf/ring buffer
#[repr(C)]
#[derive(Clone, Copy)]
pub struct ConnectionEvent {
    pub pid: u32,
    pub uid: u32,
    pub src_addr: u32,   // IPv4 in network byte order
    pub dst_addr: u32,
    pub src_port: u16,
    pub dst_port: u16,
    pub protocol: u8,
    pub action: u8,      // 0 = allow, 1 = block
}

// Key type for the blocked hosts map
#[repr(C)]
#[derive(Clone, Copy)]
pub struct IpKey {
    pub addr: u32,
}
```
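Since the address fields are stored in network byte order, user-space code must convert when building keys or printing. A small, std-only sketch of that round trip (independent of the crate itself):

```rust
use std::net::Ipv4Addr;

fn main() {
    let addr: Ipv4Addr = "93.184.216.34".parse().unwrap();

    // Host-order u32 -> network byte order (big-endian), as stored in IpKey.addr
    let key_addr: u32 = u32::from(addr).to_be();

    // Reverse conversion when printing an event's dst_addr
    let printed = Ipv4Addr::from(u32::from_be(key_addr));
    assert_eq!(printed, addr);
    println!("key_addr=0x{key_addr:08x} printed={printed}");
}
```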


eBPF Crate — Kernel Programs


eBPF programs live in ebpf/src/ and are compiled to BPF bytecode using the nightly toolchain.

```rust
// ebpf/src/main.rs — example TC (Traffic Control) eBPF program structure
#![no_std]
#![no_main]

use aya_ebpf::{
    bindings::{TC_ACT_OK, TC_ACT_SHOT},
    macros::{classifier, map},
    maps::HashMap,
    programs::TcContext,
};
use common::IpKey;

// Map shared with user space — populated by demo-runner
#[map]
static BLOCKED_HOSTS: HashMap<IpKey, u8> = HashMap::with_max_entries(65536, 0);

#[classifier]
pub fn egress_filter(ctx: TcContext) -> i32 {
    match try_egress_filter(ctx) {
        Ok(action) => action,
        Err(_) => TC_ACT_OK,
    }
}

fn try_egress_filter(ctx: TcContext) -> Result<i32, ()> {
    // Extract destination IP from packet headers
    let dst_addr = /* parse from ctx */ 0u32;
    let key = IpKey { addr: dst_addr };

    if unsafe { BLOCKED_HOSTS.get(&key) }.is_some() {
        return Ok(TC_ACT_SHOT); // Drop the packet
    }

    Ok(TC_ACT_OK)
}
```
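The `/* parse from ctx */` placeholder stands for loading the Ethernet and IPv4 headers at fixed offsets from the packet. The same offset arithmetic can be sketched in plain user-space Rust against a raw frame buffer (a simplified sketch: real eBPF code must also check the EtherType, the IHL field, and buffer bounds, as the verifier demands):

```rust
// Offsets within an Ethernet frame carrying IPv4 (no VLAN tags assumed).
const ETH_HDR_LEN: usize = 14;      // dst MAC (6) + src MAC (6) + EtherType (2)
const IPV4_DST_OFFSET: usize = 16;  // dst address starts 16 bytes into the IPv4 header

/// Extract the destination IPv4 address (network byte order) from a raw frame.
fn dst_addr_be(frame: &[u8]) -> Option<u32> {
    let start = ETH_HDR_LEN + IPV4_DST_OFFSET;
    let bytes: [u8; 4] = frame.get(start..start + 4)?.try_into().ok()?;
    // The wire is big-endian; keep it that way to match IpKey.addr
    Some(u32::from_be_bytes(bytes).to_be())
}

fn main() {
    // 34-byte minimal frame: Ethernet + IPv4 headers, dst = 93.184.216.34
    let mut frame = [0u8; 34];
    frame[30..34].copy_from_slice(&[93, 184, 216, 34]);
    let addr = dst_addr_be(&frame).unwrap();
    assert_eq!(addr, u32::from(std::net::Ipv4Addr::new(93, 184, 216, 34)).to_be());
    println!("dst addr (BE) = 0x{addr:08x}");
}
```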


Demo Runner — User Space Loader


The demo runner uses Aya to load eBPF programs and interact with maps.

```rust
// demo-runner/src/main.rs — loading eBPF and populating maps
use aya::{Bpf, maps::HashMap};
use aya::programs::{tc, SchedClassifier, TcAttachType};
use std::net::Ipv4Addr;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load the compiled eBPF object (embedded at build time)
    let mut bpf = Bpf::load(aya::include_bytes_aligned!(
        "../../target/bpfel-unknown-none/release/ebpf"
    ))?;

    // Attach TC classifier to network interface
    let iface = "eth0";
    tc::qdisc_add_clsact(iface)?;

    let program: &mut SchedClassifier = bpf
        .program_mut("egress_filter")
        .unwrap()
        .try_into()?;
    program.load()?;
    program.attach(iface, TcAttachType::Egress)?;

    // Populate blocked hosts map from file. A u32 key has the same byte
    // layout as the kernel side's IpKey (repr(C), single u32 field).
    let mut blocked_hosts: HashMap<_, u32, u8> =
        HashMap::try_from(bpf.map_mut("BLOCKED_HOSTS").unwrap())?;

    let hosts = std::fs::read_to_string("blocked_hosts.txt")?;
    for line in hosts.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') { continue; }
        if let Ok(addr) = line.parse::<Ipv4Addr>() {
            let ip_u32 = u32::from(addr).to_be();
            blocked_hosts.insert(ip_u32, 1, 0)?;
            println!("Blocked host: {}", line);
        }
    }

    println!("eBPF programs loaded. Monitoring traffic...");

    // Keep running, handle Ctrl+C
    tokio::signal::ctrl_c().await?;
    println!("Shutting down.");
    Ok(())
}
```
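The file-parsing loop in main can be factored into a pure helper, which keeps the IP handling testable without loading any eBPF. A sketch under the same semantics (hostname lines are skipped, just as the loop above silently ignores anything that fails to parse as IPv4):

```rust
use std::net::Ipv4Addr;

/// Parse blocklist text into big-endian u32 map keys.
/// Blank lines, '#' comments, and non-IPv4 lines (e.g. hostnames) are skipped.
fn parse_blocked_hosts(text: &str) -> Vec<u32> {
    text.lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .filter_map(|l| l.parse::<Ipv4Addr>().ok())
        .map(|a| u32::from(a).to_be())
        .collect()
}

fn main() {
    let text = "# comment\n93.184.216.34\n\nnot-an-ip.example\n203.0.113.0\n";
    let keys = parse_blocked_hosts(text);
    assert_eq!(keys.len(), 2); // the comment, blank, and hostname lines are skipped
    println!("parsed {} blocked hosts", keys.len());
}
```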


Adding a New Blocked Domain


```rust
// Pattern: populate domain blocklist map in demo-runner
use aya::maps::HashMap;

fn load_blocked_domains(
    bpf: &mut aya::Bpf,
    path: &str,
) -> anyhow::Result<()> {
    let mut map: HashMap<_, [u8; 256], u8> =
        HashMap::try_from(bpf.map_mut("BLOCKED_DOMAINS").unwrap())?;

    let content = std::fs::read_to_string(path)?;
    for domain in content.lines() {
        let domain = domain.trim();
        if domain.is_empty() { continue; }

        let bytes = domain.as_bytes();
        if bytes.len() > 256 { continue; } // key is fixed-size; skip oversized entries

        let mut key = [0u8; 256];
        key[..bytes.len()].copy_from_slice(bytes);
        map.insert(key, 1, 0)?;
    }
    Ok(())
}
```
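The README states the suffix semantics ("blocks domain and all subdomains"). The matching side of that contract, which the lookup code would implement against these keys, can be sketched as plain Rust (a sketch of the rule, not the project's actual matcher):

```rust
/// True when `query` equals `blocked` or is a subdomain of it.
/// "example.com" matches "example.com" and "ads.example.com",
/// but not "notexample.com" (the dot boundary is required).
fn domain_blocked(query: &str, blocked: &str) -> bool {
    let (q, b) = (query.to_ascii_lowercase(), blocked.to_ascii_lowercase());
    q == b || q.ends_with(&format!(".{b}"))
}

fn main() {
    assert!(domain_blocked("example.com", "example.com"));
    assert!(domain_blocked("ads.EXAMPLE.com", "example.com"));
    assert!(!domain_blocked("notexample.com", "example.com"));
    println!("suffix matching ok");
}
```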


Reading Events from Kernel


```rust
// Pattern: consume connection events from ring buffer
use aya::maps::RingBuf;
use common::ConnectionEvent;
use std::net::Ipv4Addr;
use tokio::io::unix::AsyncFd;

async fn read_events(bpf: &mut aya::Bpf) -> anyhow::Result<()> {
    let ring_buf = RingBuf::try_from(bpf.map_mut("EVENTS").unwrap())?;
    let mut async_fd = AsyncFd::new(ring_buf)?;

    loop {
        let mut guard = async_fd.readable_mut().await?;
        let ring_buf = guard.get_inner_mut();

        while let Some(item) = ring_buf.next() {
            let event: &ConnectionEvent = unsafe {
                &*(item.as_ptr() as *const ConnectionEvent)
            };
            println!(
                "pid={} dst={}:{} action={}",
                event.pid,
                Ipv4Addr::from(u32::from_be(event.dst_addr)),
                u16::from_be(event.dst_port),
                if event.action == 1 { "BLOCKED" } else { "ALLOWED" }
            );
        }
        guard.clear_ready();
    }
}
```
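The unsafe cast above trusts that every ring-buffer item is exactly one ConnectionEvent. A slightly more defensive decode checks the length first and reads unaligned; a self-contained sketch using the same repr(C) layout (the struct is redeclared here so the example stands alone):

```rust
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct ConnectionEvent {
    pub pid: u32,
    pub uid: u32,
    pub src_addr: u32,
    pub dst_addr: u32,
    pub src_port: u16,
    pub dst_port: u16,
    pub protocol: u8,
    pub action: u8,
}

/// Reinterpret a ring-buffer item as an event, rejecting short reads.
fn decode_event(item: &[u8]) -> Option<ConnectionEvent> {
    if item.len() < core::mem::size_of::<ConnectionEvent>() {
        return None;
    }
    // Safety: length checked above; the struct is repr(C) plain integers,
    // so any bit pattern is valid. read_unaligned avoids alignment issues.
    Some(unsafe { (item.as_ptr() as *const ConnectionEvent).read_unaligned() })
}

fn main() {
    let ev = ConnectionEvent {
        pid: 1234, uid: 1000,
        src_addr: 0, dst_addr: u32::from_be_bytes([93, 184, 216, 34]).to_be(),
        src_port: 0, dst_port: 443u16.to_be(),
        protocol: 6, action: 1,
    };

    // Serialize to raw bytes the way the ring buffer would carry them
    let mut bytes = [0u8; core::mem::size_of::<ConnectionEvent>()];
    unsafe {
        core::ptr::copy_nonoverlapping(
            &ev as *const ConnectionEvent as *const u8,
            bytes.as_mut_ptr(),
            bytes.len(),
        );
    }

    assert_eq!(decode_event(&bytes), Some(ev));
    assert_eq!(decode_event(&bytes[..4]), None); // short read rejected
    println!("decoded pid={} action={}", ev.pid, ev.action);
}
```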


Cargo.toml Structure


demo-runner/Cargo.toml

```toml
[package]
name = "demo-runner"
version = "0.1.0"
edition = "2021"

[dependencies]
aya = { version = "0.12", features = ["async_tokio"] }
aya-log = "0.2"
common = { path = "../common" }
anyhow = "1"
tokio = { version = "1", features = ["full"] }
log = "0.4"
env_logger = "0.10"

[build-dependencies]
aya-build = "0.1"
```

ebpf/Cargo.toml

```toml
[package]
name = "ebpf"
version = "0.1.0"
edition = "2021"

[dependencies]
aya-ebpf = "0.1"
aya-log-ebpf = "0.1"
common = { path = "../common" }

[[bin]]
name = "ebpf"
path = "src/main.rs"
```

---

Troubleshooting


"Operation not permitted" when loading eBPF


```bash
# Run with sudo
sudo cargo run --release

# Or grant capabilities to the binary after build
sudo setcap cap_bpf,cap_net_admin+eip target/release/demo-runner
./target/release/demo-runner
```

Build fails: bpf-linker not found

```bash
cargo install bpf-linker

# If it fails, ensure LLVM is installed:
sudo apt install llvm-dev libclang-dev  # Debian/Ubuntu
```

eBPF verifier rejects program


  • Reduce map sizes or loop bounds
  • Ensure all memory accesses are bounds-checked
  • Check that the kernel supports the helpers you're using: uname -r should report 5.15+

Map not found error


```bash
# Verify the eBPF object was built and embedded correctly
cargo build --release 2>&1 | grep -i ebpf

# The build script in demo-runner/build.rs handles the embedding automatically
```

blocked_hosts.txt not found

```bash
# Run from the repo root, or create the files first
touch blocked_hosts.txt blocked_domains.txt
sudo cargo run --release
```

---

License


All code in this repository is licensed under GPL-2.0. Contributions submitted to this project are licensed under the same terms.