* fix: Selecting the edit host did not update the view (#438)
* fix: Checksum issue with Chocolatey
* fix: Remove old Homebrew files
* Add Korean translation (#439)
* feat: Automate Flatpak packaging
* fix: Add ImageMagick to electron-builder to resolve a build error
* fix: Build error with the runtime repo flag
* fix: Flatpak runtime error and freedesktop version warning on install
* feat: Re-add Homebrew cask and move scripts to the backend
* fix: No-sandbox flag issue
* fix: Change name of the Electron macOS cask output
* fix: Sandbox error on Linux
* fix: Remove "coming soon" for app stores in the README
* Add a comment at the end of the public key deployed to the host (#440): add termix.rb cask file, update Termix to version 1.9.0 with a new checksum, remove "coming soon" notes from the README, add a new interface for the credential DB, and write the credential name as a comment into the server's authorized_keys file
* Sudo password auto-fill (#441): auto-fill the sudo password and fix the locale JSON schema
* Added Italian language (#445)
* Auto-collapse snippet folders (#448): collapsible snippets, customizable in the user profile
* Translations (#447): removed duplicate keys, synchronized other languages using English as the source, translated newly added keys, and fixed inaccurate translations
* Remove PTY-level keepalive (#449): prevents unwanted terminal output; SSH-level keepalive is used instead
* feat: Separate server stats and tunnel management (improved both UIs), then started the initial Docker implementation
* fix: Finalize adding Docker to the DB
* feat: Add Docker management support (local squash)
* Fix RBAC role system bugs and improve UX (#446):
  - Fix user list dropdown selection in host sharing
  - Fix role sharing permissions to include role-based access
  - Fix translation template interpolation for success messages
  - Standardize system roles to admin and user only, auto-assigned on registration, and protect them from editing and manual assignment
  - Remove blocking confirmation dialogs in modal contexts; add missing i18n keys for common actions; fix button type to prevent unintended form submissions
  - Move role assignment to the Users tab with per-user role management
  - Simplify the permission system: remove the Use level, keep View and Manage
  - Hide the Update button and Sharing tab for view-only/shared hosts; prevent users from sharing hosts with themselves
  - Unify table and modal styling across admin panels; add permission metadata to the host interface
  - Add an empty-state message when no custom roles are available (noCustomRolesToAssign, in English and Chinese)
  - Skip credential resolution for shared hosts with credential authentication to prevent decryption errors (credentials are encrypted per user); warn in the sharing tab that shared users cannot connect to credential-based hosts, with EN/ZH translations
* SOCKS5 support (#452): single and chained SOCKS5 proxy support
* Add notes and expiry fields (#453)
* fix: SSH host types
* fix: Incorrect sudo styling; remove expiration date
* feat: Add sudo password and diagonal backgrounds
* fix: Snippet running on Enter key
* fix: Base64 decoding
* fix: Improve server stats / RBAC
* fix: Wrap SSH host JSON export in a hosts array
* feat: Auto-trim host inputs; fix file manager jump hosts; prevent dashboard duplicates; fix file manager terminal not updating its size; improve left sidebar sorting; hide/show tags; add appearance tab to the user profile; add new host manager tabs
* feat: Improve terminal connection speed
* fix: SQLite constraint errors; support non-root user (nginx permission issue)
* feat: Add beta syntax highlighting to the terminal
* feat: Update imports and improve admin settings user management
* chore: Update translations
* feat: Complete light mode implementation with a semantic theme system (#450): comprehensive light/dark CSS variables with semantic naming; theme-aware scrollbars and CodeMirror editors; light mode backgrounds (--bg-base, --bg-elevated, --bg-surface, etc.); theme-aware borders (--border-base, --border-panel, --border-subtle); semantic text colors (--foreground-secondary, --foreground-subtle); oklch colors converted to hex for better compatibility; consistent dark mode colors (background, sidebar, card, muted, input); Tailwind color mappings for the semantic classes
* fix: Syntax errors
* chore: Update and match themes; split admin settings
* feat: Add translation workflow and remove the old translation.json
* fix: Translation workflow errors
* feat: Improve the translation system and update the workflow
* fix: Wrong path for translations; change translations to flat files; GitHub rule error
* chore: Auto-translate to multiple languages (#458)
* chore: Improve organization and make a few styling changes in the host manager
* feat: Improve terminal stability and split out the host manager
* fix: Add unversioned files
* chore: Migrate everything to the new theme system
* fix: Wrong animation line colors
* fix: RBAC implementation general issues (local squash)
* fix: Remove unneeded files
* feat: Add 10 new languages
* chore: Update .gitignore
* chore: Auto-translate to multiple languages (#459)
* fix: Improve the tunnel system
* fix: Properly split tabs; the host manager still needs work
* chore: Clean up files (possible RC)
* feat: Add Norwegian
* chore: Auto-translate to multiple languages (#461)
* fix: Small QoL fixes; begin README update; run cleanup script
* feat: Add Docker docs button
* feat: General bug fixes and README updates
* fix: Translations
* chore: Auto-translate to multiple languages (#462)
* fix: Clean up files; test new translation issue; add better server-stats support; fix translate error
* chore: Auto-translate to multiple languages (#463)
* fix: Mismatched translation text (several rounds)
* chore: Auto-translate to multiple languages (#465, #466, #467, #468)
* feat: README additions, a few QoL changes, and general server stats improvements
* chore: Auto-translate to multiple languages (#469)
* feat: Turn disk usage into a graph; fix an issue with the terminal console
* fix: Electron build error; hide icons when shared
* chore: Run clean
* fix: General server stats issues, file manager decoding, UI QoL
* fix: Add dashboard line breaks
* fix: Docker console errors (not loading, mismatched striped background for Electron, not loading inside Docker)
* chore: Translate README to Chinese
* chore: Match package-lock.json to package.json
* chore: Fix nginx config issue for the Docker console
* chore: Auto-translate to multiple languages (#470)

Co-authored-by: Tran Trung Kien <kientt13.7@gmail.com>
Co-authored-by: junu <bigdwarf_@naver.com>
Co-authored-by: 송준우 <2484@coreit.co.kr>
Co-authored-by: SlimGary <trash.slim@gmail.com>
Co-authored-by: Nunzio Marfè <nunzio.marfe@protonmail.com>
Co-authored-by: Wesley Reid <starhound@lostsouls.org>
Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Denis <38875137+Medvedinca@users.noreply.github.com>
Co-authored-by: Peet McKinney <68706879+PeetMcK@users.noreply.github.com>
749 lines · 23 KiB · TypeScript
import crypto from "crypto";
import fs from "fs";
import path from "path";
import { databaseLogger } from "./logger.js";
import { SystemCrypto } from "./system-crypto.js";
interface EncryptedFileMetadata {
  iv: string;
  tag: string;
  version: string;
  fingerprint: string;
  algorithm: string;
  keySource?: string;
  salt?: string;
  dataSize?: number;
}
class DatabaseFileEncryption {
  private static readonly VERSION = "v2";
  private static readonly ALGORITHM = "aes-256-gcm";
  private static readonly ENCRYPTED_FILE_SUFFIX = ".encrypted";
  private static readonly METADATA_FILE_SUFFIX = ".meta";
  private static systemCrypto = SystemCrypto.getInstance();
  static async encryptDatabaseFromBuffer(
    buffer: Buffer,
    targetPath: string,
  ): Promise<string> {
    const tmpPath = `${targetPath}.tmp-${Date.now()}-${process.pid}`;
    const metadataPath = `${targetPath}${this.METADATA_FILE_SUFFIX}`;

    try {
      const key = await this.systemCrypto.getDatabaseKey();
      const iv = crypto.randomBytes(16);
      const cipher = crypto.createCipheriv(
        this.ALGORITHM,
        key,
        iv,
      ) as crypto.CipherGCM;
      const encrypted = Buffer.concat([cipher.update(buffer), cipher.final()]);
      const tag = cipher.getAuthTag();

      const metadata: EncryptedFileMetadata = {
        iv: iv.toString("hex"),
        tag: tag.toString("hex"),
        version: this.VERSION,
        fingerprint: "termix-v2-systemcrypto",
        algorithm: this.ALGORITHM,
        keySource: "SystemCrypto",
        dataSize: encrypted.length,
      };

      const metadataJson = JSON.stringify(metadata, null, 2);
      const metadataBuffer = Buffer.from(metadataJson, "utf8");
      const metadataLengthBuffer = Buffer.alloc(4);
      metadataLengthBuffer.writeUInt32BE(metadataBuffer.length, 0);

      const finalBuffer = Buffer.concat([
        metadataLengthBuffer,
        metadataBuffer,
        encrypted,
      ]);

      fs.writeFileSync(tmpPath, finalBuffer);
      fs.renameSync(tmpPath, targetPath);

      try {
        if (fs.existsSync(metadataPath)) {
          fs.unlinkSync(metadataPath);
        }
      } catch (cleanupError) {
        databaseLogger.warn("Failed to cleanup old metadata file", {
          operation: "old_meta_cleanup_failed",
          path: metadataPath,
          error:
            cleanupError instanceof Error
              ? cleanupError.message
              : "Unknown error",
        });
      }

      return targetPath;
    } catch (error) {
      try {
        if (fs.existsSync(tmpPath)) {
          fs.unlinkSync(tmpPath);
        }
      } catch (cleanupError) {
        databaseLogger.warn("Failed to cleanup temporary files", {
          operation: "temp_file_cleanup_failed",
          tmpPath,
          error:
            cleanupError instanceof Error
              ? cleanupError.message
              : "Unknown error",
        });
      }

      databaseLogger.error("Failed to encrypt database buffer", error, {
        operation: "database_buffer_encryption_failed",
        targetPath,
      });
      throw new Error(
        `Database buffer encryption failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      );
    }
  }
  static async encryptDatabaseFile(
    sourcePath: string,
    targetPath?: string,
  ): Promise<string> {
    if (!fs.existsSync(sourcePath)) {
      throw new Error(`Source database file does not exist: ${sourcePath}`);
    }

    const encryptedPath =
      targetPath || `${sourcePath}${this.ENCRYPTED_FILE_SUFFIX}`;
    const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
    const tmpPath = `${encryptedPath}.tmp-${Date.now()}-${process.pid}`;
    const tmpMetadataPath = `${tmpPath}${this.METADATA_FILE_SUFFIX}`;

    try {
      const sourceData = fs.readFileSync(sourcePath);

      const key = await this.systemCrypto.getDatabaseKey();

      const iv = crypto.randomBytes(16);

      const cipher = crypto.createCipheriv(
        this.ALGORITHM,
        key,
        iv,
      ) as crypto.CipherGCM;
      const encrypted = Buffer.concat([
        cipher.update(sourceData),
        cipher.final(),
      ]);
      const tag = cipher.getAuthTag();

      const keyFingerprint = crypto
        .createHash("sha256")
        .update(key)
        .digest("hex")
        .substring(0, 16);

      const metadata: EncryptedFileMetadata = {
        iv: iv.toString("hex"),
        tag: tag.toString("hex"),
        version: this.VERSION,
        fingerprint: "termix-v2-systemcrypto",
        algorithm: this.ALGORITHM,
        keySource: "SystemCrypto",
        dataSize: encrypted.length,
      };

      fs.writeFileSync(tmpPath, encrypted);
      fs.writeFileSync(tmpMetadataPath, JSON.stringify(metadata, null, 2));

      fs.renameSync(tmpPath, encryptedPath);
      fs.renameSync(tmpMetadataPath, metadataPath);

      databaseLogger.info("Database file encrypted successfully", {
        operation: "database_file_encryption",
        sourcePath,
        encryptedPath,
        fileSize: sourceData.length,
        encryptedSize: encrypted.length,
        keyFingerprint,
        fingerprintPrefix: metadata.fingerprint,
      });

      return encryptedPath;
    } catch (error) {
      try {
        if (fs.existsSync(tmpPath)) {
          fs.unlinkSync(tmpPath);
        }
        if (fs.existsSync(tmpMetadataPath)) {
          fs.unlinkSync(tmpMetadataPath);
        }
      } catch (cleanupError) {
        databaseLogger.warn("Failed to cleanup temporary files", {
          operation: "temp_file_cleanup_failed",
          tmpPath,
          error:
            cleanupError instanceof Error
              ? cleanupError.message
              : "Unknown error",
        });
      }

      databaseLogger.error("Failed to encrypt database file", error, {
        operation: "database_file_encryption_failed",
        sourcePath,
        targetPath: encryptedPath,
      });
      throw new Error(
        `Database file encryption failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      );
    }
  }
  static async decryptDatabaseToBuffer(encryptedPath: string): Promise<Buffer> {
    if (!fs.existsSync(encryptedPath)) {
      throw new Error(
        `Encrypted database file does not exist: ${encryptedPath}`,
      );
    }

    let metadata: EncryptedFileMetadata;
    let encryptedData: Buffer;

    const fileBuffer = fs.readFileSync(encryptedPath);

    try {
      const metadataLength = fileBuffer.readUInt32BE(0);
      const metadataEnd = 4 + metadataLength;

      if (
        metadataLength <= 0 ||
        metadataEnd > fileBuffer.length ||
        metadataEnd <= 4
      ) {
        throw new Error("Invalid metadata length in single-file format");
      }

      const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
      metadata = JSON.parse(metadataJson);
      encryptedData = fileBuffer.slice(metadataEnd);

      if (!metadata.iv || !metadata.tag || !metadata.version) {
        throw new Error("Invalid metadata structure in single-file format");
      }
    } catch (singleFileError) {
      const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
      if (!fs.existsSync(metadataPath)) {
        throw new Error(
          `Could not read database: Not a valid single-file format and metadata file is missing: ${metadataPath}. Error: ${singleFileError.message}`,
        );
      }

      try {
        const metadataContent = fs.readFileSync(metadataPath, "utf8");
        metadata = JSON.parse(metadataContent);
        encryptedData = fileBuffer;
      } catch (twoFileError) {
        throw new Error(
          `Failed to read database using both single-file and two-file formats. Error: ${twoFileError.message}`,
        );
      }
    }

    try {
      if (
        metadata.dataSize !== undefined &&
        encryptedData.length !== metadata.dataSize
      ) {
        databaseLogger.error(
          "Encrypted file size mismatch - possible corrupted write or mismatched metadata",
          null,
          {
            operation: "database_file_size_mismatch",
            encryptedPath,
            actualSize: encryptedData.length,
            expectedSize: metadata.dataSize,
          },
        );
        throw new Error(
          `Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
            `This indicates corrupted files or interrupted write operation.`,
        );
      }

      let key: Buffer;
      if (metadata.version === "v2") {
        key = await this.systemCrypto.getDatabaseKey();
      } else if (metadata.version === "v1") {
        databaseLogger.warn(
          "Decrypting legacy v1 encrypted database - consider upgrading",
          {
            operation: "decrypt_legacy_v1",
            path: encryptedPath,
          },
        );
        if (!metadata.salt) {
          throw new Error("v1 encrypted file missing required salt field");
        }
        const salt = Buffer.from(metadata.salt, "hex");
        const fixedSeed =
          process.env.DB_FILE_KEY || "termix-database-file-encryption-seed-v1";
        key = crypto.pbkdf2Sync(fixedSeed, salt, 100000, 32, "sha256");
      } else {
        throw new Error(`Unsupported encryption version: ${metadata.version}`);
      }

      const decipher = crypto.createDecipheriv(
        metadata.algorithm,
        key,
        Buffer.from(metadata.iv, "hex"),
      ) as crypto.DecipherGCM;
      decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));

      const decryptedBuffer = Buffer.concat([
        decipher.update(encryptedData),
        decipher.final(),
      ]);

      return decryptedBuffer;
    } catch (error) {
      const errorMessage =
        error instanceof Error ? error.message : "Unknown error";
      const isAuthError =
        errorMessage.includes("Unsupported state") ||
        errorMessage.includes("authenticate data") ||
        errorMessage.includes("auth");

      if (isAuthError) {
        const dataDir = process.env.DATA_DIR || "./db/data";
        const envPath = path.join(dataDir, ".env");

        let envFileExists = false;
        let envFileReadable = false;
        try {
          envFileExists = fs.existsSync(envPath);
          if (envFileExists) {
            fs.accessSync(envPath, fs.constants.R_OK);
            envFileReadable = true;
          }
        } catch (error) {
          // Best-effort diagnostics only; ignore access errors.
        }

        databaseLogger.error(
          "Database decryption authentication failed - possible causes: wrong DATABASE_KEY, corrupted files, or interrupted write",
          error,
          {
            operation: "database_buffer_decryption_auth_failed",
            encryptedPath,
            dataDir,
            envPath,
            envFileExists,
            envFileReadable,
            hasEnvKey: !!process.env.DATABASE_KEY,
            envKeyLength: process.env.DATABASE_KEY?.length || 0,
            suggestion:
              "Check if DATABASE_KEY in .env matches the key used for encryption",
          },
        );
        throw new Error(
          `Database decryption authentication failed. This usually means:\n` +
            `1. DATABASE_KEY has changed or is missing from ${dataDir}/.env\n` +
            `2. Encrypted file was corrupted during write (system crash/restart)\n` +
            `3. Metadata file does not match encrypted data\n` +
            `\nDebug info:\n` +
            `- DATA_DIR: ${dataDir}\n` +
            `- .env file exists: ${envFileExists}\n` +
            `- .env file readable: ${envFileReadable}\n` +
            `- DATABASE_KEY in environment: ${!!process.env.DATABASE_KEY}\n` +
            `Original error: ${errorMessage}`,
        );
      }

      databaseLogger.error("Failed to decrypt database to buffer", error, {
        operation: "database_buffer_decryption_failed",
        encryptedPath,
        errorMessage,
      });
      throw new Error(`Database buffer decryption failed: ${errorMessage}`);
    }
  }
  static async decryptDatabaseFile(
    encryptedPath: string,
    targetPath?: string,
  ): Promise<string> {
    if (!fs.existsSync(encryptedPath)) {
      throw new Error(
        `Encrypted database file does not exist: ${encryptedPath}`,
      );
    }

    const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
    if (!fs.existsSync(metadataPath)) {
      throw new Error(`Metadata file does not exist: ${metadataPath}`);
    }

    const decryptedPath =
      targetPath || encryptedPath.replace(this.ENCRYPTED_FILE_SUFFIX, "");

    try {
      const metadataContent = fs.readFileSync(metadataPath, "utf8");
      const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);

      const encryptedData = fs.readFileSync(encryptedPath);

      if (
        metadata.dataSize !== undefined &&
        encryptedData.length !== metadata.dataSize
      ) {
        databaseLogger.error(
          "Encrypted file size mismatch - possible corrupted write or mismatched metadata",
          null,
          {
            operation: "database_file_size_mismatch",
            encryptedPath,
            actualSize: encryptedData.length,
            expectedSize: metadata.dataSize,
          },
        );
        throw new Error(
          `Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
            `This indicates corrupted files or interrupted write operation.`,
        );
      }

      let key: Buffer;
      if (metadata.version === "v2") {
        key = await this.systemCrypto.getDatabaseKey();
      } else if (metadata.version === "v1") {
        databaseLogger.warn(
          "Decrypting legacy v1 encrypted database - consider upgrading",
          {
            operation: "decrypt_legacy_v1",
            path: encryptedPath,
          },
        );
        if (!metadata.salt) {
          throw new Error("v1 encrypted file missing required salt field");
        }
        const salt = Buffer.from(metadata.salt, "hex");
        const fixedSeed =
          process.env.DB_FILE_KEY || "termix-database-file-encryption-seed-v1";
        key = crypto.pbkdf2Sync(fixedSeed, salt, 100000, 32, "sha256");
      } else {
        throw new Error(`Unsupported encryption version: ${metadata.version}`);
      }

      const decipher = crypto.createDecipheriv(
        metadata.algorithm,
        key,
        Buffer.from(metadata.iv, "hex"),
      ) as crypto.DecipherGCM;
      decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));

      const decrypted = Buffer.concat([
        decipher.update(encryptedData),
        decipher.final(),
      ]);

      fs.writeFileSync(decryptedPath, decrypted);

      databaseLogger.info("Database file decrypted successfully", {
        operation: "database_file_decryption",
        encryptedPath,
        decryptedPath,
        encryptedSize: encryptedData.length,
        decryptedSize: decrypted.length,
        fingerprintPrefix: metadata.fingerprint,
      });

      return decryptedPath;
    } catch (error) {
      databaseLogger.error("Failed to decrypt database file", error, {
        operation: "database_file_decryption_failed",
        encryptedPath,
        targetPath: decryptedPath,
      });
      throw new Error(
        `Database file decryption failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      );
    }
  }
  static isEncryptedDatabaseFile(filePath: string): boolean {
    if (!fs.existsSync(filePath)) {
      return false;
    }

    const metadataPath = `${filePath}${this.METADATA_FILE_SUFFIX}`;
    if (fs.existsSync(metadataPath)) {
      try {
        const metadataContent = fs.readFileSync(metadataPath, "utf8");
        const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
        return (
          metadata.version === this.VERSION &&
          metadata.algorithm === this.ALGORITHM
        );
      } catch {
        return false;
      }
    }

    try {
      const fileBuffer = fs.readFileSync(filePath);
      if (fileBuffer.length < 4) return false;

      const metadataLength = fileBuffer.readUInt32BE(0);
      const metadataEnd = 4 + metadataLength;

      if (metadataLength <= 0 || metadataEnd > fileBuffer.length) {
        return false;
      }

      const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
      const metadata: EncryptedFileMetadata = JSON.parse(metadataJson);

      return (
        metadata.version === this.VERSION &&
        metadata.algorithm === this.ALGORITHM &&
        !!metadata.iv &&
        !!metadata.tag
      );
    } catch {
      return false;
    }
  }
  static getEncryptedFileInfo(encryptedPath: string): {
    version: string;
    algorithm: string;
    fingerprint: string;
    isCurrentHardware: boolean;
    fileSize: number;
  } | null {
    if (!this.isEncryptedDatabaseFile(encryptedPath)) {
      return null;
    }

    try {
      const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
      const metadataContent = fs.readFileSync(metadataPath, "utf8");
      const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);

      const fileStats = fs.statSync(encryptedPath);

      return {
        version: metadata.version,
        algorithm: metadata.algorithm,
        fingerprint: metadata.fingerprint,
        isCurrentHardware: true,
        fileSize: fileStats.size,
      };
    } catch {
      return null;
    }
  }
  static getDiagnosticInfo(encryptedPath: string): {
    dataFile: {
      exists: boolean;
      size?: number;
      mtime?: string;
      readable?: boolean;
    };
    metadataFile: {
      exists: boolean;
      size?: number;
      mtime?: string;
      readable?: boolean;
      content?: EncryptedFileMetadata;
    };
    environment: {
      dataDir: string;
      envPath: string;
      envFileExists: boolean;
      envFileReadable: boolean;
      hasEnvKey: boolean;
      envKeyLength: number;
    };
    validation: {
      filesConsistent: boolean;
      sizeMismatch?: boolean;
      expectedSize?: number;
      actualSize?: number;
    };
  } {
    const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
    const dataDir = process.env.DATA_DIR || "./db/data";
    const envPath = path.join(dataDir, ".env");

    const result: ReturnType<typeof this.getDiagnosticInfo> = {
      dataFile: { exists: false },
      metadataFile: { exists: false },
      environment: {
        dataDir,
        envPath,
        envFileExists: false,
        envFileReadable: false,
        hasEnvKey: !!process.env.DATABASE_KEY,
        envKeyLength: process.env.DATABASE_KEY?.length || 0,
      },
      validation: {
        filesConsistent: false,
      },
    };

    try {
      result.dataFile.exists = fs.existsSync(encryptedPath);
      if (result.dataFile.exists) {
        try {
          fs.accessSync(encryptedPath, fs.constants.R_OK);
          result.dataFile.readable = true;
          const stats = fs.statSync(encryptedPath);
          result.dataFile.size = stats.size;
          result.dataFile.mtime = stats.mtime.toISOString();
        } catch {
          result.dataFile.readable = false;
        }
      }

      result.metadataFile.exists = fs.existsSync(metadataPath);
      if (result.metadataFile.exists) {
        try {
          fs.accessSync(metadataPath, fs.constants.R_OK);
          result.metadataFile.readable = true;
          const stats = fs.statSync(metadataPath);
          result.metadataFile.size = stats.size;
          result.metadataFile.mtime = stats.mtime.toISOString();

          const content = fs.readFileSync(metadataPath, "utf8");
          result.metadataFile.content = JSON.parse(content);
        } catch {
          result.metadataFile.readable = false;
        }
      }

      result.environment.envFileExists = fs.existsSync(envPath);
      if (result.environment.envFileExists) {
        try {
          fs.accessSync(envPath, fs.constants.R_OK);
          result.environment.envFileReadable = true;
        } catch (error) {
          // Best-effort diagnostics only; ignore access errors.
        }
      }

      if (
        result.dataFile.exists &&
        result.metadataFile.exists &&
        result.metadataFile.content
      ) {
        result.validation.filesConsistent = true;

        if (result.metadataFile.content.dataSize !== undefined) {
          result.validation.expectedSize = result.metadataFile.content.dataSize;
          result.validation.actualSize = result.dataFile.size;
          result.validation.sizeMismatch =
            result.metadataFile.content.dataSize !== result.dataFile.size;
          if (result.validation.sizeMismatch) {
            result.validation.filesConsistent = false;
          }
        }
      }
    } catch (error) {
      databaseLogger.error("Failed to generate diagnostic info", error, {
        operation: "diagnostic_info_failed",
        encryptedPath,
      });
    }

    databaseLogger.info("Database encryption diagnostic info", {
      operation: "diagnostic_info_generated",
      ...result,
    });

    return result;
  }
  static async createEncryptedBackup(
    databasePath: string,
    backupDir: string,
  ): Promise<string> {
    if (!fs.existsSync(databasePath)) {
      throw new Error(`Database file does not exist: ${databasePath}`);
    }

    if (!fs.existsSync(backupDir)) {
      fs.mkdirSync(backupDir, { recursive: true });
    }

    const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
    const backupFileName = `database-backup-${timestamp}.sqlite.encrypted`;
    const backupPath = path.join(backupDir, backupFileName);

    try {
      const encryptedPath = await this.encryptDatabaseFile(
        databasePath,
        backupPath,
      );

      return encryptedPath;
    } catch (error) {
      databaseLogger.error("Failed to create encrypted backup", error, {
        operation: "database_backup_failed",
        sourcePath: databasePath,
        backupDir,
      });
      throw error;
    }
  }
  static async restoreFromEncryptedBackup(
    backupPath: string,
    targetPath: string,
  ): Promise<string> {
    if (!this.isEncryptedDatabaseFile(backupPath)) {
      throw new Error("Invalid encrypted backup file");
    }

    try {
      const restoredPath = await this.decryptDatabaseFile(
        backupPath,
        targetPath,
      );

      return restoredPath;
    } catch (error) {
      databaseLogger.error("Failed to restore from encrypted backup", error, {
        operation: "database_restore_failed",
        backupPath,
        targetPath,
      });
      throw error;
    }
  }
  static cleanupTempFiles(basePath: string): void {
    try {
      const tempFiles = [
        `${basePath}.tmp`,
        `${basePath}${this.ENCRYPTED_FILE_SUFFIX}`,
        `${basePath}${this.ENCRYPTED_FILE_SUFFIX}${this.METADATA_FILE_SUFFIX}`,
      ];

      for (const tempFile of tempFiles) {
        if (fs.existsSync(tempFile)) {
          fs.unlinkSync(tempFile);
        }
      }
    } catch (error) {
      databaseLogger.warn("Failed to clean up temporary files", {
        operation: "temp_cleanup_failed",
        basePath,
        error: error instanceof Error ? error.message : "Unknown error",
      });
    }
  }
}
export { DatabaseFileEncryption };
export type { EncryptedFileMetadata };
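For reference, the v2 single-file layout written by `encryptDatabaseFromBuffer` above is `[4-byte BE metadata length][JSON metadata][AES-256-GCM ciphertext]`. The minimal standalone sketch below (not part of the class) shows that layout roundtripping with Node's `crypto` module; a random key stands in for `SystemCrypto.getDatabaseKey()`, and `packEncrypted`/`unpackEncrypted` are hypothetical names for illustration only.

```typescript
import crypto from "crypto";

// Hypothetical sketch of the v2 single-file layout:
// [4-byte BE metadata length][JSON metadata][AES-256-GCM ciphertext].
// A random 32-byte key stands in for SystemCrypto.getDatabaseKey().
function packEncrypted(key: Buffer, data: Buffer): Buffer {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const encrypted = Buffer.concat([cipher.update(data), cipher.final()]);
  const metadata = Buffer.from(
    JSON.stringify({
      iv: iv.toString("hex"),
      tag: cipher.getAuthTag().toString("hex"),
      version: "v2",
      algorithm: "aes-256-gcm",
      dataSize: encrypted.length,
    }),
    "utf8",
  );
  const lengthPrefix = Buffer.alloc(4);
  lengthPrefix.writeUInt32BE(metadata.length, 0);
  return Buffer.concat([lengthPrefix, metadata, encrypted]);
}

function unpackEncrypted(key: Buffer, file: Buffer): Buffer {
  const metadataEnd = 4 + file.readUInt32BE(0);
  const metadata = JSON.parse(file.slice(4, metadataEnd).toString("utf8"));
  const decipher = crypto.createDecipheriv(
    "aes-256-gcm",
    key,
    Buffer.from(metadata.iv, "hex"),
  );
  decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));
  // Any tampering with the ciphertext or tag makes final() throw (GCM auth).
  return Buffer.concat([
    decipher.update(file.slice(metadataEnd)),
    decipher.final(),
  ]);
}
```

Embedding the metadata in the same file (rather than the legacy `.meta` sidecar) is what lets `encryptDatabaseFromBuffer` commit the encrypted database atomically with a single `rename`.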