sqlite-to-fast-sql


SQLite to Fast SQL Migration


Migrate bridge-based SQLite or SQL plugins to `@capgo/capacitor-fast-sql`.

When to Use This Skill


  • User wants to replace an existing SQLite or SQL plugin
  • User needs better performance for large result sets or sync-style writes
  • User wants encrypted local storage, transactions, batch writes, or BLOB support
  • User wants a key-value wrapper backed by Fast SQL instead of a legacy storage plugin

Live Project Snapshot


Detected SQL-related packages: !`node -e "const fs=require('fs');if(!fs.existsSync('package.json'))process.exit(0);const pkg=JSON.parse(fs.readFileSync('package.json','utf8'));const needles=['sqlite','sqlcipher','typeorm','watermelondb','pouchdb','@capacitor-community/sqlite','@capawesome-team/capacitor-sqlite','@capgo/capacitor-fast-sql'];const out=[];for(const section of ['dependencies','devDependencies']){for(const [name,version] of Object.entries(pkg[section]||{})){if(needles.some((needle)=>name.includes(needle)))out.push(section+'.'+name+'='+version)}}console.log(out.sort().join('\n'))"`

Why Fast SQL


Fast SQL is the preferred migration target because it avoids heavy bridge serialization by using a local HTTP transport to native SQLite. That makes it much faster for large result sets and sync-heavy write patterns.
Fast SQL also provides:
  • transactions with explicit or callback control
  • batch execution for multiple statements
  • BLOB support for binary data
  • encryption and read-only modes
  • `KeyValueStore` for lightweight key-value access on top of SQLite
  • web fallback support through SQL.js
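
To make those features concrete, here is a minimal sketch of what that surface could look like in TypeScript. The method names (`query`, `run`, `executeBatch`, `transaction`) come from this guide; the parameter and result shapes, and the in-memory stand-in used instead of the native plugin, are assumptions for illustration only.

```typescript
// Sketch of the Fast SQL surface named above. Method names follow this
// guide; exact parameter and result types are assumptions.
interface FastSQLLike {
  query(sql: string, params?: unknown[]): Promise<{ rows: Record<string, unknown>[] }>;
  run(sql: string, params?: unknown[]): Promise<void>;
  executeBatch(statements: { sql: string; params?: unknown[] }[]): Promise<void>;
  transaction(work: (tx: FastSQLLike) => Promise<void>): Promise<void>;
}

// Minimal in-memory stand-in so the shapes above can be exercised without
// the native plugin; it only understands the two statements used below.
class MemoryDb implements FastSQLLike {
  private users: Record<string, unknown>[] = [];

  async query(sql: string): Promise<{ rows: Record<string, unknown>[] }> {
    if (/^SELECT/i.test(sql)) return { rows: [...this.users] };
    return { rows: [] };
  }

  async run(sql: string, params: unknown[] = []): Promise<void> {
    if (/^INSERT INTO users/i.test(sql)) {
      this.users.push({ id: params[0], name: params[1] });
    }
  }

  async executeBatch(statements: { sql: string; params?: unknown[] }[]): Promise<void> {
    for (const s of statements) await this.run(s.sql, s.params ?? []);
  }

  async transaction(work: (tx: FastSQLLike) => Promise<void>): Promise<void> {
    await work(this); // a real driver would BEGIN/COMMIT around this
  }
}

async function demo(): Promise<number> {
  const db = new MemoryDb();
  await db.transaction(async (tx) => {
    await tx.run('INSERT INTO users (id, name) VALUES (?, ?)', [1, 'Ada']);
  });
  await db.executeBatch([
    { sql: 'INSERT INTO users (id, name) VALUES (?, ?)', params: [2, 'Grace'] },
  ]);
  const { rows } = await db.query('SELECT * FROM users');
  return rows.length;
}
```

Against the real plugin, the same call sites would target the connection returned by `FastSQL.connect(...)` rather than a mock.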

Migration Procedure


Step 1: Inspect the Current SQL Plugin


Start from the injected package snapshot, then read `package.json` directly if the current SQL plugin set still needs clarification.
Document whether the app uses:
  • raw SQL queries
  • transactions
  • BLOB data
  • migrations/schema bootstrap
  • key-value wrappers
  • encrypted storage

Step 2: Map the Current API Surface


Map the old plugin calls to Fast SQL equivalents:
  • connection setup -> `FastSQL.connect(...)`
  • reads -> `db.query(...)`
  • single-statement writes -> `db.run(...)`
  • multi-statement work -> `db.executeBatch(...)`
  • transactional work -> `db.transaction(...)` or explicit `beginTransaction` / `commit` / `rollback`
  • key-value storage -> `KeyValueStore.open(...)`
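
During the transition, that mapping can be captured in a small adapter so old call sites keep working while the engine underneath is already Fast SQL. The sketch below assumes the legacy plugin exposed a callback-based `executeSql`-style method, which is typical of WebSQL-era plugins but not any specific plugin's exact signature; the Fast SQL method names follow the list above.

```typescript
// Promise-based shape assumed for a Fast SQL connection (names from the
// mapping above; parameter types are assumptions).
interface FastConn {
  query(sql: string, params?: unknown[]): Promise<{ rows: unknown[] }>;
  run(sql: string, params?: unknown[]): Promise<void>;
}

// Callback shape typical of legacy WebSQL-derived plugins (illustrative).
type LegacyExecuteSql = (
  sql: string,
  params: unknown[],
  onSuccess: (rows: unknown[]) => void,
  onError: (err: Error) => void,
) => void;

// Adapter: route reads to query(...) and writes to run(...), keeping the
// old callback contract intact while migrating call sites incrementally.
function legacyShim(conn: FastConn): LegacyExecuteSql {
  return (sql, params, onSuccess, onError) => {
    const isRead = /^\s*SELECT/i.test(sql);
    const op = isRead
      ? conn.query(sql, params).then((r) => r.rows)
      : conn.run(sql, params).then(() => [] as unknown[]);
    op.then(onSuccess, onError);
  };
}
```

Once every call site is migrated to the promise-based API, the shim can be deleted.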

Step 3: Install Fast SQL


Install the new package with the repository's package manager and sync native projects.

```bash
npm install @capgo/capacitor-fast-sql
npx cap sync
```

If the app ships web support, install `sql.js` for the web fallback when needed.

Step 4: Update Code


Replace old plugin imports and APIs with Fast SQL.
Prefer `db.executeBatch(...)` for repeated writes, `db.transaction(...)` for atomic changes, and `KeyValueStore` for simple local key-value data.
Preserve the existing schema and migration steps unless the old plugin forced a different format.
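
The "prefer `executeBatch` over a loop of single writes" advice can be illustrated with a counter standing in for transport round trips. The connection type below is a mock built for this sketch, not the plugin's real API; only the method names come from this guide.

```typescript
// Mock connection whose only job is to count transport round trips.
interface BatchConn {
  run(sql: string, params?: unknown[]): Promise<void>;
  executeBatch(statements: { sql: string; params?: unknown[] }[]): Promise<void>;
}

function countingConn(): BatchConn & { trips: number } {
  const conn = {
    trips: 0,
    async run() { conn.trips += 1; },        // one trip per statement
    async executeBatch() { conn.trips += 1; }, // whole batch: one trip
  };
  return conn;
}

async function compare(n: number): Promise<{ looped: number; batched: number }> {
  // Before: awaiting run(...) per row pays one round trip each.
  const a = countingConn();
  for (let i = 0; i < n; i++) {
    await a.run('INSERT INTO log (i) VALUES (?)', [i]);
  }

  // After: the same rows as a single executeBatch(...) call.
  const b = countingConn();
  await b.executeBatch(
    Array.from({ length: n }, (_, i) => ({
      sql: 'INSERT INTO log (i) VALUES (?)',
      params: [i],
    })),
  );
  return { looped: a.trips, batched: b.trips };
}
```

The same structural rewrite applies when replacing a legacy plugin's per-statement write API.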

Step 5: Reconfigure Native Platforms


Apply the Fast SQL platform setup required by the app:
  • iOS local network access when the plugin needs localhost traffic
  • Android cleartext network configuration for localhost traffic
  • SQLCipher dependency when encrypted mode is enabled on Android
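
As a sketch, the Android cleartext requirement is commonly met with a network security config scoped to localhost only, rather than enabling cleartext app-wide. Treat this as illustrative and confirm the exact setup against the plugin's own documentation:

```xml
<!-- res/xml/network_security_config.xml: permit cleartext only for localhost -->
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="false">localhost</domain>
    </domain-config>
</network-security-config>
```

Reference the file from the manifest via `android:networkSecurityConfig="@xml/network_security_config"` on the `<application>` element. On iOS, local network access typically means adding an `NSLocalNetworkUsageDescription` entry to `Info.plist`.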

Step 6: Remove the Old Plugin


Remove the legacy SQL package from `package.json`, reinstall dependencies, and sync again.
Then run the app's normal database smoke tests or migration verification checks.

Error Handling


  • If encrypted storage is required, keep `encrypted: true` and provide a strong key before shipping.
  • If the old plugin exposed transactions, use Fast SQL transaction APIs rather than emulating them with ad hoc queries.
  • If the app depends on large result sets, prefer batch queries and avoid bridge-heavy wrappers.
  • If the app already has a well-defined schema migration path, keep it and only swap the storage engine.
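
Because the encrypted path stands or falls on key quality, a pre-flight check can run before the database is opened. In this sketch only `encrypted: true` comes from this guide; the other option names and the strength thresholds are hypothetical placeholders.

```typescript
// Hypothetical connect options shaped around the guide's `encrypted: true`
// requirement; the real plugin's option names may differ.
interface ConnectOptions {
  name: string;
  encrypted: boolean;
  key?: string;
}

// Reject obviously weak keys before they reach the database layer.
function assertStrongKey(key: string | undefined): string {
  if (!key || key.length < 32) {
    throw new Error('encrypted mode requires a key of at least 32 characters');
  }
  if (/^(password|123|test)/i.test(key)) {
    throw new Error('refusing a guessable key');
  }
  return key;
}

function buildEncryptedOptions(name: string, key: string | undefined): ConnectOptions {
  return { name, encrypted: true, key: assertStrongKey(key) };
}
```

In practice the key should come from the platform keychain or a secure-storage plugin, never from source code or an unencrypted config file.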