* fix: Resolve database encryption atomicity issues and enhance debugging (#430)

* fix: Resolve database encryption atomicity issues and enhance debugging

This commit addresses critical data corruption issues caused by non-atomic
file writes during database encryption, and adds comprehensive diagnostic
logging to help debug encryption-related failures.

**Problem:**
Users reported "Unsupported state or unable to authenticate data" errors
when starting the application after system crashes or Docker container
restarts. The root cause was non-atomic writes of encrypted database files:

1. Encrypted data file is written
2. Metadata file is written
→ If the process crashes between steps 1 and 2, the files become inconsistent
→ New IV/tag in the data file, old IV/tag in the metadata
→ GCM authentication fails on the next startup
→ User data becomes permanently inaccessible

**Solution - Atomic Writes:**

1. Write-to-temp + atomic-rename pattern (sketched after this list):
   - Write to temporary files (*.tmp-timestamp-pid)
   - Perform atomic rename operations
   - Clean up temp files on failure

2. Data integrity validation:
   - Add dataSize field to metadata
   - Verify file size before decryption
   - Early detection of corrupted writes

3. Enhanced error diagnostics:
   - Key fingerprints (SHA256 prefix) for verification
   - File modification timestamps
   - Detailed GCM auth failure messages
   - Automatic diagnostic info generation
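
A minimal sketch of the temp-file + rename pattern, assuming a helper along these lines (the name `writeFileAtomic` and the exact flags are illustrative, not the actual implementation):

```ts
import * as fs from "fs";
import * as path from "path";

// Illustrative sketch only; the real encryptDatabaseFromBuffer /
// encryptDatabaseFile may differ in detail.
function writeFileAtomic(targetPath: string, data: Buffer): void {
  // Unique temp name in the same directory, so the rename stays on one filesystem
  const tmpPath = path.join(
    path.dirname(targetPath),
    `${path.basename(targetPath)}.tmp-${Date.now()}-${process.pid}`,
  );
  try {
    const fd = fs.openSync(tmpPath, "w");
    try {
      fs.writeSync(fd, data);
      fs.fsyncSync(fd); // flush to disk before the rename
    } finally {
      fs.closeSync(fd);
    }
    fs.renameSync(tmpPath, targetPath); // atomic on POSIX filesystems
  } catch (err) {
    try {
      fs.unlinkSync(tmpPath); // clean up the temp file on failure
    } catch {}
    throw err;
  }
}
```

Readers of the target path see either the old file or the complete new file, never a partial write.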

**Changes:**

database-file-encryption.ts:
- Implement atomic write pattern in encryptDatabaseFromBuffer
- Implement atomic write pattern in encryptDatabaseFile
- Add dataSize field to EncryptedFileMetadata interface
- Validate file size before decryption in decryptDatabaseToBuffer
- Enhance error messages for GCM auth failures
- Add getDiagnosticInfo() function for comprehensive debugging
- Add debug logging for all encryption/decryption operations

system-crypto.ts:
- Add detailed logging for DATABASE_KEY initialization
- Log key source (env var vs .env file)
- Add key fingerprints to all log messages (helper sketched below)
- Better error messages when key loading fails
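
The fingerprints above are short, non-reversible digest prefixes that are safe to log. A sketch, assuming a helper like this (name and prefix length are guesses):

```ts
import { createHash } from "crypto";

// Hypothetical helper: log only a short SHA-256 prefix of the key, enough
// to tell keys apart across log lines without exposing key material.
function keyFingerprint(key: Buffer | string): string {
  return createHash("sha256").update(key).digest("hex").slice(0, 8);
}
```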

db/index.ts:
- Automatically generate diagnostic info on decryption failure
- Log detailed debugging information to help users troubleshoot

**Debugging Info Added:**

- Key initialization: source, fingerprint, length, path
- Encryption: original size, encrypted size, IV/tag prefixes, temp paths
- Decryption: file timestamps, metadata content, key fingerprint matching
- Auth failures: .env file status, key availability, file consistency
- File diagnostics: existence, readability, size validation, mtime comparison
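
From the fields referenced in the db/index.ts diff further down (`validation.filesConsistent`, `validation.sizeMismatch`), the diagnostic object looks roughly like this; everything beyond those two fields is an assumption:

```ts
// Assumed shape, inferred from the diagnostic fields used in db/index.ts.
interface DiagnosticInfo {
  validation: {
    filesConsistent: boolean; // data file and metadata agree
    sizeMismatch: boolean; // on-disk size differs from the recorded dataSize
  };
  // ...plus file existence, readability, mtimes, key fingerprint (hypothetical)
}
```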

**Backward Compatibility:**
- dataSize field is optional (metadata.dataSize?: number)
- Old encrypted files without dataSize continue to work (see the sketch below)
- No migration required
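
A sketch of the backward-compatible check, assuming the metadata interface described above (fields other than `dataSize` are illustrative):

```ts
interface EncryptedFileMetadata {
  iv: string;
  authTag: string;
  dataSize?: number; // absent on files encrypted before this change
}

// Validate only when dataSize is present, so legacy files still decrypt.
function validateEncryptedSize(
  meta: EncryptedFileMetadata,
  actualSize: number,
): void {
  if (meta.dataSize !== undefined && meta.dataSize !== actualSize) {
    throw new Error(
      `Encrypted file size mismatch: expected ${meta.dataSize}, got ${actualSize}`,
    );
  }
}
```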

**Testing:**
- Compiled successfully
- No breaking changes to existing APIs
- Graceful handling of legacy v1 encrypted files

Fixes data loss issues reported by users experiencing container restarts
and system crashes during database saves.

* fix: Cleanup PR

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: LukeGus <bugattiguy527@gmail.com>
Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: Merge metadata and DB into 1 file

* fix: Add initial command palette

* Feature/german language support (#431)

* Update translation.json

Fixed some translation issues for German, making the wording more user-friendly and natural.

* Update translation.json

Added updated block for serverStats

* Update translation.json

Added translations

* Update translation.json

Removed duplicate "free": "Free" entry

* feat: Finalize command palette

* fix: Several bug fixes for terminals and server stats, plus general feature improvements

* feat: Enhanced security, UI improvements, and animations (#432)

* fix: Remove empty catch blocks and add error logging

* refactor: Modularize server stats widget collectors

* feat: Add i18n support for terminal customization and login stats

- Add comprehensive terminal customization translations (60+ keys) for appearance, behavior, and advanced settings across all 4 languages
- Add SSH login statistics translations
- Update HostManagerEditor to use i18n for all terminal customization UI elements
- Update LoginStatsWidget to use i18n for all UI text
- Add missing logger imports in backend files for improved debugging

* feat: Add keyboard shortcut enhancements with Kbd component

- Add shadcn kbd component for displaying keyboard shortcuts
- Enhance file manager context menu to display shortcuts with Kbd component
- Add 5 new keyboard shortcuts to file manager:
  - Ctrl+D: Download selected files
  - Ctrl+N: Create new file
  - Ctrl+Shift+N: Create new folder
  - Ctrl+U: Upload files
  - Enter: Open/run selected file
- Add keyboard shortcut hints to command palette footer
- Create helper function to parse and render keyboard shortcuts
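
The helper itself is not shown in this log; a minimal sketch of what the parsing might look like (the name `parseShortcut` and the `<Kbd>` usage are assumptions):

```ts
// Hypothetical helper: split "Ctrl+Shift+N" into its keys so each one can
// be rendered as a <Kbd> element in the context menu or palette footer.
function parseShortcut(shortcut: string): string[] {
  return shortcut.split("+").map((key) => key.trim());
}

// parseShortcut("Ctrl+Shift+N") -> ["Ctrl", "Shift", "N"]
```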

* feat: Add i18n support for command palette

- Add commandPalette translation section with 22 keys to all 4 languages
- Update CommandPalette component to use i18n for all UI text
- Translate search placeholder, group headings, menu items, and shortcut hints
- Support multilingual command palette interface

* feat: Add smooth transitions and animations to UI

- Add fade-in/fade-out transition to command palette (200ms)
- Add scale animation to command palette on open/close
- Add smooth popup animation to context menu (150ms)
- Add visual feedback for file selection with ring effect
- Add hover scale effect to file grid items
- Add transition-all to list view items for consistent behavior
- Zero JavaScript overhead, pure CSS transitions
- All animations under 200ms for instant feel

* feat: Add button active state and dashboard card animations

- Add active:scale-95 to all buttons for tactile click feedback
- Add hover border effect to dashboard cards (150ms transition)
- Add pulse animation to dashboard loading states
- Pure CSS transitions with zero JavaScript overhead
- Improves enterprise-level feel of UI

* feat: Add smooth macOS-style page transitions

- Add fullscreen crossfade transition for login/logout (300ms fade-out + 400ms fade-in)
- Add slide-in-from-right animation for all page switches (Dashboard, Terminal, SSH Manager, Admin, Profile)
- Fix TypeScript compilation by adding esModuleInterop to tsconfig.node.json
- Pass handleLogout from DesktopApp to LeftSidebar for consistent transition behavior

All page transitions now use Tailwind animate-in utilities with 300ms duration for smooth, native-feeling UX

* fix: Add key prop to force animation re-trigger on tab switch

Each page container now has key={currentTab} to ensure React unmounts and remounts the element on every tab switch, properly triggering the slide-in animation
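
A minimal illustration of the technique, with placeholder names: when `key` changes, React discards the old element and mounts a fresh one, so CSS enter animations replay.

```tsx
// Placeholder component; `currentTab` content and class names are illustrative.
function PageContainer({ currentTab }: { currentTab: string }) {
  return (
    <div key={currentTab} className="animate-in slide-in-from-right duration-300">
      {/* content for currentTab */}
    </div>
  );
}
```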

* revert: Remove page transition animations

Page switching animations were not noticeable enough and felt unnecessary.
Keep only the login/logout fullscreen crossfade transitions, which provide clear visual feedback for authentication state changes.

* feat: Add ripple effect to login/logout transitions

Add three-layer expanding ripple animation during fadeOut phase:
- Ripples expand from screen center using primary theme color
- Each layer has staggered delay (0ms, 150ms, 300ms) for wave effect
- Ripples fade out as they expand to create elegant visual feedback
- Uses pure CSS keyframe animation, no external libraries

Total animation: 800ms ripple + 300ms screen fade

* feat: Add smooth TERMIX logo animation to transitions

Changes:
- Extend transition duration from 300ms/400ms to 800ms/600ms for more elegant feel
- Reduce ripple intensity from /20,/15,/10 to /8,/5 for subtlety
- Slow down ripple animation from 0.8s to 2s with cubic-bezier easing
- Add centered TERMIX logo with monospace font and subtitle
- Logo fades in from 80% scale, holds, then fades out at 110% scale
- Total effect: 1.2s logo animation synced with 2s ripple waves

Creates a premium, branded transition experience

* feat: Enhance transition animation with premium details

Timing adjustments:
- Extend fadeOut from 800ms to 1200ms
- Extend fadeIn from 600ms to 800ms
- Slow background fade to 700ms for elegance

Visual enhancements:
- Add 4-layer ripple waves (10%, 7%, 5%, 3% opacity) with staggered delays
- Ripple animation extended to 2.5s with refined opacity curve
- Logo blur effect: starts at 8px, sharpens to 0px, exits at 4px
- Logo glow effect: triple-layer text-shadow using primary theme color
- Increase logo size from text-6xl to text-7xl
- Subtitle delayed fade-in from bottom with smooth slide animation

Creates a cinematic, polished brand experience

* feat: Redesign login page with split-screen cinematic layout

Major redesign of authentication page:

Left Side (40% width):
- Full-height gradient background using primary theme color
- Large TERMIX logo with glow effect
- Subtitle and tagline
- Infinite animated ripple waves (3 layers)
- Hidden on mobile, shows brand identity

Right Side (60% width):
- Centered glassmorphism card with backdrop blur
- Refined tab switcher with pill-style active state
- Enlarged title with gradient text effect
- Added welcome subtitles for better UX
- Card slides in from bottom on load
- All existing functionality preserved

Visual enhancements:
- Tab navigation: segmented control style in muted container
- Active tab: white background with subtle shadow
- Smooth 200ms transitions on all interactions
- Card: rounded-2xl, shadow-xl, semi-transparent border

Creates premium, modern login experience matching transition animations

* feat: Update login page theme colors and add i18n support

- Changed login page gradient from blue to match dark theme colors
- Updated ripple effects to use theme primary color
- Added i18n translation keys for login page (auth.tagline, auth.description, auth.welcomeBack, auth.createAccount, auth.continueExternal)
- Updated all language files (en, zh, de, ru, pt-BR) with new translations
- Fixed TypeScript compilation issues by clearing build cache

* refactor: Use shadcn Tabs component and fix modal styling

- Replace custom tab navigation with shadcn Tabs component
- Restore border-2 border-dark-border for modal consistency
- Remove circular icon from login success message
- Simplify authentication success display

* refactor: Remove ripple effects and gradient from login page

- Remove animated ripple background effects
- Remove gradient background, use solid color (bg-dark-bg-darker)
- Remove text-shadow glow effect from logo
- Simplify brand showcase to clean, minimal design

* feat: Add decorative slash and remove subtitle from login page

- Add decorative slash divider with gradient lines below TERMIX logo
- Remove subtitle text (welcomeBack and createAccount)
- Simplify page title to show only the main heading

* feat: Add diagonal line pattern background to login page

- Replace decorative slash with subtle diagonal line pattern background
- Use repeating-linear-gradient at 45deg angle
- Set very low opacity (0.03) for subtle effect
- Pattern uses theme primary color

* fix: Display diagonal line pattern on login background

- Combine background color and pattern in single style attribute
- Use white semi-transparent lines (rgba 0.03 opacity)
- 45deg angle, 35px spacing, 2px width
- Remove separate overlay div to ensure pattern visibility
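
A sketch of the combined style attribute, assuming a CSS variable for the background color (the variable name and surrounding markup are guesses):

```tsx
import type { CSSProperties } from "react";

// Color and pattern combined in one style attribute; a separate overlay
// div previously hid the pattern.
const loginBackgroundStyle: CSSProperties = {
  backgroundColor: "var(--dark-bg-darker)", // assumed theme variable
  backgroundImage:
    "repeating-linear-gradient(45deg, rgba(255,255,255,0.03) 0px, rgba(255,255,255,0.03) 2px, transparent 2px, transparent 35px)",
};
```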

* security: Fix user enumeration vulnerability in login

- Unify error messages for invalid username and incorrect password
- Both return 401 status with 'Invalid username or password'
- Prevent attackers from enumerating valid usernames
- Maintain detailed logging for debugging purposes
- Changed from 404 'User not found' to generic auth failure message
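
A hedged sketch of the unified failure path (the user lookup and password check are placeholders):

```ts
import type { Response } from "express";

// Placeholder password check (e.g. bcrypt.compare in the real code)
declare function verifyPassword(hash: string, password: string): Promise<boolean>;

async function authenticateOrReject(
  res: Response,
  user: { passwordHash: string } | undefined,
  password: string,
): Promise<boolean> {
  const ok =
    user !== undefined && (await verifyPassword(user.passwordHash, password));
  if (!ok) {
    // Same status and message whether the username or the password was wrong
    res.status(401).json({ error: "Invalid username or password" });
    return false;
  }
  return true; // caller proceeds to issue the session
}
```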

* security: Add login rate limiting to prevent brute force attacks

- Implement LoginRateLimiter with IP and username-based tracking (sketched after this list)
- Block after 5 failed attempts within 15 minutes
- Lock account/IP for 15 minutes after threshold
- Automatic cleanup of expired entries every 5 minutes
- Track remaining attempts in logs for monitoring
- Return 429 status with remaining time on rate limit
- Reset counters on successful login
- Dual protection: both IP-based and username-based limits
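
A compact sketch of the limiter described above, assuming in-memory maps (the actual LoginRateLimiter API is not shown in this log):

```ts
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window and lock duration
const MAX_ATTEMPTS = 5;

interface Entry {
  failures: number[]; // timestamps of recent failed attempts
  blockedUntil?: number;
}

class LoginRateLimiter {
  private entries = new Map<string, Entry>();

  constructor() {
    // Sweep expired entries every 5 minutes
    setInterval(() => this.cleanup(), 5 * 60 * 1000);
  }

  /** Remaining block time in ms; 0 means the key may attempt a login. */
  check(key: string): number {
    const e = this.entries.get(key);
    if (e?.blockedUntil && e.blockedUntil > Date.now()) {
      return e.blockedUntil - Date.now(); // caller returns 429 with this
    }
    return 0;
  }

  recordFailure(key: string): void {
    const now = Date.now();
    const e = this.entries.get(key) ?? { failures: [] };
    e.failures = e.failures.filter((t) => now - t < WINDOW_MS);
    e.failures.push(now);
    if (e.failures.length >= MAX_ATTEMPTS) e.blockedUntil = now + WINDOW_MS;
    this.entries.set(key, e);
  }

  reset(key: string): void {
    this.entries.delete(key); // successful login clears the counters
  }

  private cleanup(): void {
    const now = Date.now();
    for (const [key, e] of this.entries) {
      const stale =
        !e.failures.some((t) => now - t < WINDOW_MS) &&
        (!e.blockedUntil || e.blockedUntil <= now);
      if (stale) this.entries.delete(key);
    }
  }
}
```

For the dual protection, the login route would call check() and recordFailure() twice, once keyed by client IP and once by username, and reset() both on success.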

* French translation (#434)

* Adding French Language

* Enhancements

* feat: Replace the old SSH tools system with a new dedicated sidebar

* fix: Merge zac/luke

* fix: Finalize new sidebar, improve loading animations

* Added ability to close non-primary tabs involved in a split view (#435)

* fix: General bug fixes/small feature improvements

* feat: General UI improvements and translation updates

* fix: Command history and file manager styling issues

* feat: General bug fixes; added server stat commands, improved split screen, account linking, etc.

* fix: add Accept header for OIDC callback request (#436)

* Delete DOWNLOADS.md

* fix: add Accept header for OIDC callback request

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* fix: More bug fixes and QoL improvements

* fix: Server stats not respecting interval and SSH tool type issues

* fix: Remove github links

* fix: Delete account spacing

* fix: Increment version

* fix: Unable to delete hosts; add nginx for terminal

* fix: Unable to delete hosts

* fix: Unable to delete hosts

* fix: Unable to delete hosts

* fix: OIDC/local account linking breaking both logins

* chore: File cleanup

* feat: Max terminal tab size and save current file manager sorting type

* fix: Terminal display issue, migrate host editor to use combobox

* feat: Add snippet folder/customization system

* fix: OIDC linking and prep release

* fix: Increment version

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Max <herzmaximilian@gmail.com>
Co-authored-by: SlimGary <trash.slim@gmail.com>
Co-authored-by: jarrah31 <jarrah31@gmail.com>
Co-authored-by: Kf637 <mail@kf637.tech>
Merged in pull request #437.
Author: Luke Gustafson, 2025-11-17 09:46:05 -06:00 (committed by GitHub)
Parent 38a59f3579, commit 8366c99b0f
104 changed files with 16070 additions and 2821 deletions

View File

@@ -7,6 +7,7 @@ import sshRoutes from "./routes/ssh.js";
import alertRoutes from "./routes/alerts.js";
import credentialsRoutes from "./routes/credentials.js";
import snippetsRoutes from "./routes/snippets.js";
import terminalRoutes from "./routes/terminal.js";
import cors from "cors";
import fetch from "node-fetch";
import fs from "fs";
@@ -21,6 +22,7 @@ import { DatabaseMigration } from "../utils/database-migration.js";
import { UserDataExport } from "../utils/user-data-export.js";
import { AutoSSLSetup } from "../utils/auto-ssl-setup.js";
import { eq, and } from "drizzle-orm";
import { parseUserAgent } from "../utils/user-agent-parser.js";
import {
users,
sshData,
@@ -456,8 +458,12 @@ app.post("/database/export", authenticateJWT, async (req, res) => {
code: "PASSWORD_REQUIRED",
});
}
const unlocked = await authManager.authenticateUser(userId, password);
const deviceInfo = parseUserAgent(req);
const unlocked = await authManager.authenticateUser(
userId,
password,
deviceInfo.type,
);
if (!unlocked) {
return res.status(401).json({ error: "Invalid password" });
}
@@ -904,6 +910,7 @@ app.post(
const userId = (req as AuthenticatedRequest).userId;
const { password } = req.body;
const mainDb = getDb();
const deviceInfo = parseUserAgent(req);
const userRecords = await mainDb
.select()
@@ -924,12 +931,19 @@ app.post(
});
}
const unlocked = await authManager.authenticateUser(userId, password);
const unlocked = await authManager.authenticateUser(
userId,
password,
deviceInfo.type,
);
if (!unlocked) {
return res.status(401).json({ error: "Invalid password" });
}
} else if (!DataCrypto.getUserDataKey(userId)) {
const oidcUnlocked = await authManager.authenticateOIDCUser(userId);
const oidcUnlocked = await authManager.authenticateOIDCUser(
userId,
deviceInfo.type,
);
if (!oidcUnlocked) {
return res.status(403).json({
error: "Failed to unlock user data with SSO credentials",
@@ -947,7 +961,10 @@ app.post(
let userDataKey = DataCrypto.getUserDataKey(userId);
if (!userDataKey && isOidcUser) {
const oidcUnlocked = await authManager.authenticateOIDCUser(userId);
const oidcUnlocked = await authManager.authenticateOIDCUser(
userId,
deviceInfo.type,
);
if (oidcUnlocked) {
userDataKey = DataCrypto.getUserDataKey(userId);
}
@@ -1418,6 +1435,7 @@ app.use("/ssh", sshRoutes);
app.use("/alerts", alertRoutes);
app.use("/credentials", credentialsRoutes);
app.use("/snippets", snippetsRoutes);
app.use("/terminal", terminalRoutes);
app.use(
(
@@ -1480,13 +1498,13 @@ app.get(
if (status.hasUnencryptedDb) {
try {
unencryptedSize = fs.statSync(dbPath).size;
} catch {}
} catch (error) {}
}
if (status.hasEncryptedDb) {
try {
encryptedSize = fs.statSync(encryptedDbPath).size;
} catch {}
} catch (error) {}
}
res.json({

View File

@@ -95,6 +95,26 @@ async function initializeDatabaseAsync(): Promise<void> {
databaseKeyLength: process.env.DATABASE_KEY?.length || 0,
});
try {
const diagnosticInfo =
DatabaseFileEncryption.getDiagnosticInfo(encryptedDbPath);
databaseLogger.error(
"Database encryption diagnostic completed - check logs above for details",
null,
{
operation: "db_encryption_diagnostic_completed",
filesConsistent: diagnosticInfo.validation.filesConsistent,
sizeMismatch: diagnosticInfo.validation.sizeMismatch,
},
);
} catch (diagError) {
databaseLogger.warn("Failed to generate diagnostic information", {
operation: "db_diagnostic_failed",
error:
diagError instanceof Error ? diagError.message : "Unknown error",
});
}
throw new Error(
`Database decryption failed: ${error instanceof Error ? error.message : "Unknown error"}. This prevents data loss.`,
);
@@ -120,6 +140,8 @@ async function initializeCompleteDatabase(): Promise<void> {
sqlite = memoryDatabase;
sqlite.exec("PRAGMA foreign_keys = ON");
db = drizzle(sqlite, { schema });
sqlite.exec(`
@@ -157,7 +179,7 @@ async function initializeCompleteDatabase(): Promise<void> {
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
expires_at TEXT NOT NULL,
last_active_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_data (
@@ -188,7 +210,7 @@ async function initializeCompleteDatabase(): Promise<void> {
terminal_config TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_recent (
@@ -198,8 +220,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
last_opened TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_pinned (
@@ -209,8 +231,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
pinned_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_shortcuts (
@@ -220,8 +242,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS dismissed_alerts (
@@ -229,7 +251,7 @@ async function initializeCompleteDatabase(): Promise<void> {
user_id TEXT NOT NULL,
alert_id TEXT NOT NULL,
dismissed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_credentials (
@@ -249,7 +271,7 @@ async function initializeCompleteDatabase(): Promise<void> {
last_used TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_credential_usage (
@@ -258,9 +280,9 @@ async function initializeCompleteDatabase(): Promise<void> {
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
used_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (credential_id) REFERENCES ssh_credentials (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id),
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (credential_id) REFERENCES ssh_credentials (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS snippets (
@@ -271,7 +293,18 @@ async function initializeCompleteDatabase(): Promise<void> {
description TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_folders (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
name TEXT NOT NULL,
color TEXT,
icon TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS recent_activity (
@@ -279,10 +312,20 @@ async function initializeCompleteDatabase(): Promise<void> {
user_id TEXT NOT NULL,
type TEXT NOT NULL,
host_id INTEGER NOT NULL,
host_name TEXT NOT NULL,
host_name TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS command_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
host_id INTEGER NOT NULL,
command TEXT NOT NULL,
executed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
`);
@@ -343,14 +386,14 @@ const addColumnIfNotExists = (
try {
sqlite
.prepare(
`SELECT ${column}
`SELECT "${column}"
FROM ${table} LIMIT 1`,
)
.get();
} catch {
try {
sqlite.exec(`ALTER TABLE ${table}
ADD COLUMN ${column} ${definition};`);
ADD COLUMN "${column}" ${definition};`);
} catch (alterError) {
databaseLogger.warn(`Failed to add column ${column} to ${table}`, {
operation: "schema_migration",
@@ -405,6 +448,7 @@ const migrateSchema = () => {
"INTEGER NOT NULL DEFAULT 1",
);
addColumnIfNotExists("ssh_data", "tunnel_connections", "TEXT");
addColumnIfNotExists("ssh_data", "jump_hosts", "TEXT");
addColumnIfNotExists(
"ssh_data",
"enable_file_manager",
@@ -428,7 +472,12 @@ const migrateSchema = () => {
addColumnIfNotExists(
"ssh_data",
"credential_id",
"INTEGER REFERENCES ssh_credentials(id)",
"INTEGER REFERENCES ssh_credentials(id) ON DELETE SET NULL",
);
addColumnIfNotExists(
"ssh_data",
"override_credential_username",
"INTEGER",
);
addColumnIfNotExists("ssh_data", "autostart_password", "TEXT");
@@ -436,6 +485,7 @@ const migrateSchema = () => {
addColumnIfNotExists("ssh_data", "autostart_key_password", "TEXT");
addColumnIfNotExists("ssh_data", "stats_config", "TEXT");
addColumnIfNotExists("ssh_data", "terminal_config", "TEXT");
addColumnIfNotExists("ssh_data", "quick_actions", "TEXT");
addColumnIfNotExists("ssh_credentials", "private_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "public_key", "TEXT");
@@ -445,6 +495,35 @@ const migrateSchema = () => {
addColumnIfNotExists("file_manager_pinned", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("file_manager_shortcuts", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("snippets", "folder", "TEXT");
addColumnIfNotExists("snippets", "order", "INTEGER NOT NULL DEFAULT 0");
try {
sqlite
.prepare("SELECT id FROM snippet_folders LIMIT 1")
.get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS snippet_folders (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
name TEXT NOT NULL,
color TEXT,
icon TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create snippet_folders table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite
.prepare("SELECT id FROM sessions LIMIT 1")

View File

@@ -34,7 +34,7 @@ export const sessions = sqliteTable("sessions", {
id: text("id").primaryKey(),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
jwtToken: text("jwt_token").notNull(),
deviceType: text("device_type").notNull(),
deviceInfo: text("device_info").notNull(),
@@ -51,7 +51,7 @@ export const sshData = sqliteTable("ssh_data", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
name: text("name"),
ip: text("ip").notNull(),
port: integer("port").notNull(),
@@ -71,7 +71,10 @@ export const sshData = sqliteTable("ssh_data", {
autostartKey: text("autostart_key", { length: 8192 }),
autostartKeyPassword: text("autostart_key_password"),
credentialId: integer("credential_id").references(() => sshCredentials.id),
credentialId: integer("credential_id").references(() => sshCredentials.id, { onDelete: "set null" }),
overrideCredentialUsername: integer("override_credential_username", {
mode: "boolean",
}),
enableTerminal: integer("enable_terminal", { mode: "boolean" })
.notNull()
.default(true),
@@ -79,12 +82,14 @@ export const sshData = sqliteTable("ssh_data", {
.notNull()
.default(true),
tunnelConnections: text("tunnel_connections"),
jumpHosts: text("jump_hosts"),
enableFileManager: integer("enable_file_manager", { mode: "boolean" })
.notNull()
.default(true),
defaultPath: text("default_path"),
statsConfig: text("stats_config"),
terminalConfig: text("terminal_config"),
quickActions: text("quick_actions"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
@@ -97,10 +102,10 @@ export const fileManagerRecent = sqliteTable("file_manager_recent", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
lastOpened: text("last_opened")
@@ -112,10 +117,10 @@ export const fileManagerPinned = sqliteTable("file_manager_pinned", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
pinnedAt: text("pinned_at")
@@ -127,10 +132,10 @@ export const fileManagerShortcuts = sqliteTable("file_manager_shortcuts", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
createdAt: text("created_at")
@@ -142,7 +147,7 @@ export const dismissedAlerts = sqliteTable("dismissed_alerts", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
alertId: text("alert_id").notNull(),
dismissedAt: text("dismissed_at")
.notNull()
@@ -153,7 +158,7 @@ export const sshCredentials = sqliteTable("ssh_credentials", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
description: text("description"),
folder: text("folder"),
@@ -181,13 +186,13 @@ export const sshCredentialUsage = sqliteTable("ssh_credential_usage", {
id: integer("id").primaryKey({ autoIncrement: true }),
credentialId: integer("credential_id")
.notNull()
.references(() => sshCredentials.id),
.references(() => sshCredentials.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
usedAt: text("used_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
@@ -197,10 +202,44 @@ export const snippets = sqliteTable("snippets", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
content: text("content").notNull(),
description: text("description"),
folder: text("folder"),
order: integer("order").notNull().default(0),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const snippetFolders = sqliteTable("snippet_folders", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
color: text("color"),
icon: text("icon"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const sshFolders = sqliteTable("ssh_folders", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
color: text("color"),
icon: text("icon"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
@@ -213,13 +252,27 @@ export const recentActivity = sqliteTable("recent_activity", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
type: text("type").notNull(),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
hostName: text("host_name").notNull(),
.references(() => sshData.id, { onDelete: "cascade" }),
hostName: text("host_name"),
timestamp: text("timestamp")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const commandHistory = sqliteTable("command_history", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
command: text("command").notNull(),
executedAt: text("executed_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});

View File

@@ -524,6 +524,8 @@ router.delete(
return res.status(404).json({ error: "Credential not found" });
}
// Update hosts using this credential to set credentialId to null
// This prevents orphaned references before deletion
const hostsUsingCredential = await db
.select()
.from(sshData)
@@ -552,14 +554,8 @@ router.delete(
);
}
await db
.delete(sshCredentialUsage)
.where(
and(
eq(sshCredentialUsage.credentialId, parseInt(id)),
eq(sshCredentialUsage.userId, userId),
),
);
// sshCredentialUsage will be automatically deleted by ON DELETE CASCADE
// No need for manual deletion
await db
.delete(sshCredentials)
@@ -1259,6 +1255,10 @@ async function deploySSHKeyToHost(
return rejectAdd(err);
}
stream.on("data", () => {
// Consume output
});
stream.on("close", (code) => {
clearTimeout(addTimeout);
if (code === 0) {
@@ -1519,7 +1519,8 @@ router.post(
});
}
if (!credData.publicKey) {
const publicKey = credData.public_key || credData.publicKey;
if (!publicKey) {
return res.status(400).json({
success: false,
error: "Public key is required for deployment",
@@ -1600,7 +1601,7 @@ router.post(
const deployResult = await deploySSHKeyToHost(
hostConfig,
credData.publicKey as string,
publicKey as string,
credData,
);

View File

@@ -1,8 +1,8 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { snippets } from "../db/schema.js";
import { eq, and, desc, sql } from "drizzle-orm";
import { snippets, snippetFolders } from "../db/schema.js";
import { eq, and, desc, asc, sql } from "drizzle-orm";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
@@ -17,6 +17,651 @@ const authManager = AuthManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const requireDataAccess = authManager.createDataAccessMiddleware();
// Get all snippet folders
// GET /snippets/folders
router.get(
"/folders",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for snippet folders fetch");
return res.status(400).json({ error: "Invalid userId" });
}
try {
const result = await db
.select()
.from(snippetFolders)
.where(eq(snippetFolders.userId, userId))
.orderBy(asc(snippetFolders.name));
res.json(result);
} catch (err) {
authLogger.error("Failed to fetch snippet folders", err);
res.status(500).json({ error: "Failed to fetch snippet folders" });
}
},
);
// Create a new snippet folder
// POST /snippets/folders
router.post(
"/folders",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name, color, icon } = req.body;
if (!isNonEmptyString(userId) || !isNonEmptyString(name)) {
authLogger.warn("Invalid snippet folder creation data", {
operation: "snippet_folder_create",
userId,
hasName: !!name,
});
return res.status(400).json({ error: "Folder name is required" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(eq(snippetFolders.userId, userId), eq(snippetFolders.name, name)),
);
if (existing.length > 0) {
return res
.status(409)
.json({ error: "Folder with this name already exists" });
}
const insertData = {
userId,
name: name.trim(),
color: color?.trim() || null,
icon: icon?.trim() || null,
};
const result = await db
.insert(snippetFolders)
.values(insertData)
.returning();
authLogger.success(`Snippet folder created: ${name} by user ${userId}`, {
operation: "snippet_folder_create_success",
userId,
name,
});
res.status(201).json(result[0]);
} catch (err) {
authLogger.error("Failed to create snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to create snippet folder",
});
}
},
);
// Update snippet folder metadata (color, icon)
// PUT /snippets/folders/:name/metadata
router.put(
"/folders/:name/metadata",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name } = req.params;
const { color, icon } = req.body;
if (!isNonEmptyString(userId) || !name) {
authLogger.warn("Invalid request for snippet folder metadata update");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
if (existing.length === 0) {
return res.status(404).json({ error: "Folder not found" });
}
const updateFields: Partial<{
color: string | null;
icon: string | null;
updatedAt: ReturnType<typeof sql.raw>;
}> = {
updatedAt: sql`CURRENT_TIMESTAMP`,
};
if (color !== undefined) updateFields.color = color?.trim() || null;
if (icon !== undefined) updateFields.icon = icon?.trim() || null;
await db
.update(snippetFolders)
.set(updateFields)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
const updated = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
authLogger.success(
`Snippet folder metadata updated: ${name} by user ${userId}`,
{
operation: "snippet_folder_metadata_update_success",
userId,
name,
},
);
res.json(updated[0]);
} catch (err) {
authLogger.error("Failed to update snippet folder metadata", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to update snippet folder metadata",
});
}
},
);
// Rename snippet folder
// PUT /snippets/folders/rename
router.put(
"/folders/rename",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { oldName, newName } = req.body;
if (
!isNonEmptyString(userId) ||
!isNonEmptyString(oldName) ||
!isNonEmptyString(newName)
) {
authLogger.warn("Invalid request for snippet folder rename");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, oldName),
),
);
if (existing.length === 0) {
return res.status(404).json({ error: "Folder not found" });
}
const nameExists = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, newName),
),
);
if (nameExists.length > 0) {
return res
.status(409)
.json({ error: "Folder with new name already exists" });
}
await db
.update(snippetFolders)
.set({ name: newName, updatedAt: sql`CURRENT_TIMESTAMP` })
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, oldName),
),
);
await db
.update(snippets)
.set({ folder: newName })
.where(and(eq(snippets.userId, userId), eq(snippets.folder, oldName)));
authLogger.success(
`Snippet folder renamed: ${oldName} -> ${newName} by user ${userId}`,
{
operation: "snippet_folder_rename_success",
userId,
oldName,
newName,
},
);
res.json({ success: true, oldName, newName });
} catch (err) {
authLogger.error("Failed to rename snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to rename snippet folder",
});
}
},
);
// Delete snippet folder
// DELETE /snippets/folders/:name
router.delete(
"/folders/:name",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name } = req.params;
if (!isNonEmptyString(userId) || !name) {
authLogger.warn("Invalid request for snippet folder delete");
return res.status(400).json({ error: "Invalid request" });
}
try {
const folderName = decodeURIComponent(name);
await db
.update(snippets)
.set({ folder: null })
.where(
and(eq(snippets.userId, userId), eq(snippets.folder, folderName)),
);
await db
.delete(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, folderName),
),
);
authLogger.success(
`Snippet folder deleted: ${folderName} by user ${userId}`,
{
operation: "snippet_folder_delete_success",
userId,
name: folderName,
},
);
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to delete snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to delete snippet folder",
});
}
},
);
// Reorder snippets (bulk update)
// PUT /snippets/reorder
router.put(
"/reorder",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { snippets: snippetUpdates } = req.body;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for snippet reorder");
return res.status(400).json({ error: "Invalid userId" });
}
if (!Array.isArray(snippetUpdates) || snippetUpdates.length === 0) {
authLogger.warn("Invalid snippet reorder data", {
operation: "snippet_reorder",
userId,
});
return res
.status(400)
.json({ error: "snippets array is required and must not be empty" });
}
try {
for (const update of snippetUpdates) {
const { id, order, folder } = update;
if (!id || order === undefined) {
continue;
}
const updateFields: Partial<{
order: number;
folder: string | null;
}> = {
order,
};
if (folder !== undefined) {
updateFields.folder = folder?.trim() || null;
}
await db
.update(snippets)
.set(updateFields)
.where(and(eq(snippets.id, id), eq(snippets.userId, userId)));
}
authLogger.success(`Snippets reordered by user ${userId}`, {
operation: "snippet_reorder_success",
userId,
count: snippetUpdates.length,
});
res.json({ success: true, updated: snippetUpdates.length });
} catch (err) {
authLogger.error("Failed to reorder snippets", err);
res.status(500).json({
error:
err instanceof Error ? err.message : "Failed to reorder snippets",
});
}
},
);
// Execute a snippet on a host
// POST /snippets/execute
router.post(
"/execute",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { snippetId, hostId } = req.body;
if (!isNonEmptyString(userId) || !snippetId || !hostId) {
authLogger.warn("Invalid snippet execution request", {
userId,
snippetId,
hostId,
});
return res
.status(400)
.json({ error: "Snippet ID and Host ID are required" });
}
try {
const snippetResult = await db
.select()
.from(snippets)
.where(
and(
eq(snippets.id, parseInt(snippetId)),
eq(snippets.userId, userId),
),
);
if (snippetResult.length === 0) {
return res.status(404).json({ error: "Snippet not found" });
}
const snippet = snippetResult[0];
const { Client } = await import("ssh2");
const { sshData, sshCredentials } = await import("../db/schema.js");
const { SimpleDBOps } = await import("../../utils/simple-db-ops.js");
const hostResult = await SimpleDBOps.select(
db
.select()
.from(sshData)
.where(
and(eq(sshData.id, parseInt(hostId)), eq(sshData.userId, userId)),
),
"ssh_data",
userId,
);
if (hostResult.length === 0) {
return res.status(404).json({ error: "Host not found" });
}
const host = hostResult[0];
let password = host.password;
let privateKey = host.key;
let passphrase = host.key_password;
let authType = host.authType;
if (host.credentialId) {
const credResult = await SimpleDBOps.select(
db
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, host.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credResult.length > 0) {
const cred = credResult[0];
authType = (cred.auth_type || cred.authType || authType) as string;
password = (cred.password || undefined) as string | undefined;
privateKey = (cred.private_key || cred.key || undefined) as
| string
| undefined;
passphrase = (cred.key_password || undefined) as string | undefined;
}
}
const conn = new Client();
let output = "";
let errorOutput = "";
const executePromise = new Promise<{
success: boolean;
output: string;
error?: string;
}>((resolve, reject) => {
const timeout = setTimeout(() => {
conn.end();
reject(new Error("Command execution timeout (30s)"));
}, 30000);
conn.on("ready", () => {
conn.exec(snippet.content, (err, stream) => {
if (err) {
clearTimeout(timeout);
conn.end();
return reject(err);
}
stream.on("close", () => {
clearTimeout(timeout);
conn.end();
if (errorOutput) {
resolve({ success: false, output, error: errorOutput });
} else {
resolve({ success: true, output });
}
});
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
});
});
});
conn.on("error", (err) => {
clearTimeout(timeout);
reject(err);
});
const config: any = {
host: host.ip,
port: host.port,
username: host.username,
tryKeyboard: true,
keepaliveInterval: 30000,
keepaliveCountMax: 3,
readyTimeout: 30000,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
timeout: 30000,
env: {
TERM: "xterm-256color",
LANG: "en_US.UTF-8",
LC_ALL: "en_US.UTF-8",
LC_CTYPE: "en_US.UTF-8",
LC_MESSAGES: "en_US.UTF-8",
LC_MONETARY: "en_US.UTF-8",
LC_NUMERIC: "en_US.UTF-8",
LC_TIME: "en_US.UTF-8",
LC_COLLATE: "en_US.UTF-8",
COLORTERM: "truecolor",
},
algorithms: {
kex: [
"curve25519-sha256",
"curve25519-sha256@libssh.org",
"ecdh-sha2-nistp521",
"ecdh-sha2-nistp384",
"ecdh-sha2-nistp256",
"diffie-hellman-group-exchange-sha256",
"diffie-hellman-group14-sha256",
"diffie-hellman-group14-sha1",
"diffie-hellman-group-exchange-sha1",
"diffie-hellman-group1-sha1",
],
serverHostKey: [
"ssh-ed25519",
"ecdsa-sha2-nistp521",
"ecdsa-sha2-nistp384",
"ecdsa-sha2-nistp256",
"rsa-sha2-512",
"rsa-sha2-256",
"ssh-rsa",
"ssh-dss",
],
cipher: [
"chacha20-poly1305@openssh.com",
"aes256-gcm@openssh.com",
"aes128-gcm@openssh.com",
"aes256-ctr",
"aes192-ctr",
"aes128-ctr",
"aes256-cbc",
"aes192-cbc",
"aes128-cbc",
"3des-cbc",
],
hmac: [
"hmac-sha2-512-etm@openssh.com",
"hmac-sha2-256-etm@openssh.com",
"hmac-sha2-512",
"hmac-sha2-256",
"hmac-sha1",
"hmac-md5",
],
compress: ["none", "zlib@openssh.com", "zlib"],
},
};
if (authType === "password" && password) {
config.password = password;
} else if (authType === "key" && privateKey) {
const cleanKey = (privateKey as string)
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (passphrase) {
config.passphrase = passphrase;
}
} else if (password) {
config.password = password;
} else if (privateKey) {
const cleanKey = (privateKey as string)
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (passphrase) {
config.passphrase = passphrase;
}
}
conn.connect(config);
});
const result = await executePromise;
authLogger.success(
`Snippet executed: ${snippet.name} on host ${hostId}`,
{
operation: "snippet_execute_success",
userId,
snippetId,
hostId,
},
);
res.json(result);
} catch (err) {
authLogger.error("Failed to execute snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to execute snippet",
});
}
},
);
// Get all snippets for the authenticated user
// GET /snippets
router.get(
@@ -36,7 +681,12 @@ router.get(
.select()
.from(snippets)
.where(eq(snippets.userId, userId))
.orderBy(desc(snippets.updatedAt));
.orderBy(
sql`CASE WHEN ${snippets.folder} IS NULL OR ${snippets.folder} = '' THEN 0 ELSE 1 END`,
asc(snippets.folder),
asc(snippets.order),
desc(snippets.updatedAt),
);
res.json(result);
} catch (err) {
@@ -93,7 +743,7 @@ router.post(
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name, content, description } = req.body;
const { name, content, description, folder, order } = req.body;
if (
!isNonEmptyString(userId) ||
@@ -110,11 +760,31 @@ router.post(
}
try {
let snippetOrder = order;
if (snippetOrder === undefined || snippetOrder === null) {
const folderValue = folder?.trim() || "";
const maxOrderResult = await db
.select({ maxOrder: sql<number>`MAX(${snippets.order})` })
.from(snippets)
.where(
and(
eq(snippets.userId, userId),
folderValue
? eq(snippets.folder, folderValue)
: sql`(${snippets.folder} IS NULL OR ${snippets.folder} = '')`,
),
);
const maxOrder = maxOrderResult[0]?.maxOrder ?? -1;
snippetOrder = maxOrder + 1;
}
const insertData = {
userId,
name: name.trim(),
content: content.trim(),
description: description?.trim() || null,
folder: folder?.trim() || null,
order: snippetOrder,
};
const result = await db.insert(snippets).values(insertData).returning();
@@ -167,6 +837,8 @@ router.put(
name: string;
content: string;
description: string | null;
folder: string | null;
order: number;
}> = {
updatedAt: sql`CURRENT_TIMESTAMP`,
};
@@ -177,6 +849,9 @@ router.put(
updateFields.content = updateData.content.trim();
if (updateData.description !== undefined)
updateFields.description = updateData.description?.trim() || null;
if (updateData.folder !== undefined)
updateFields.folder = updateData.folder?.trim() || null;
if (updateData.order !== undefined) updateFields.order = updateData.order;
await db
.update(snippets)

View File

@@ -8,6 +8,9 @@ import {
fileManagerRecent,
fileManagerPinned,
fileManagerShortcuts,
sshFolders,
commandHistory,
recentActivity,
} from "../db/schema.js";
import { eq, and, desc, isNotNull, or } from "drizzle-orm";
import type { Request, Response } from "express";
@@ -234,6 +237,8 @@ router.post(
enableFileManager,
defaultPath,
tunnelConnections,
jumpHosts,
quickActions,
statsConfig,
terminalConfig,
forceKeyboardInteractive,
@@ -270,6 +275,10 @@ router.post(
tunnelConnections: Array.isArray(tunnelConnections)
? JSON.stringify(tunnelConnections)
: null,
jumpHosts: Array.isArray(jumpHosts) ? JSON.stringify(jumpHosts) : null,
quickActions: Array.isArray(quickActions)
? JSON.stringify(quickActions)
: null,
enableFileManager: enableFileManager ? 1 : 0,
defaultPath: defaultPath || null,
statsConfig: statsConfig ? JSON.stringify(statsConfig) : null,
@@ -328,6 +337,9 @@ router.post(
tunnelConnections: createdHost.tunnelConnections
? JSON.parse(createdHost.tunnelConnections as string)
: [],
jumpHosts: createdHost.jumpHosts
? JSON.parse(createdHost.jumpHosts as string)
: [],
enableFileManager: !!createdHost.enableFileManager,
statsConfig: createdHost.statsConfig
? JSON.parse(createdHost.statsConfig as string)
@@ -349,6 +361,28 @@ router.post(
},
);
try {
const axios = (await import("axios")).default;
const statsPort = process.env.STATS_PORT || 30005;
await axios.post(
`http://localhost:${statsPort}/host-updated`,
{ hostId: createdHost.id },
{
headers: {
Authorization: req.headers.authorization || "",
Cookie: req.headers.cookie || "",
},
timeout: 5000,
},
);
} catch (err) {
sshLogger.warn("Failed to notify stats server of new host", {
operation: "host_create",
hostId: createdHost.id as number,
error: err instanceof Error ? err.message : String(err),
});
}
res.json(resolvedHost);
} catch (err) {
sshLogger.error("Failed to save SSH host to database", err, {
@@ -369,6 +403,7 @@ router.post(
router.put(
"/db/host/:id",
authenticateJWT,
requireDataAccess,
upload.single("key"),
async (req: Request, res: Response) => {
const hostId = req.params.id;
@@ -424,6 +459,8 @@ router.put(
enableFileManager,
defaultPath,
tunnelConnections,
jumpHosts,
quickActions,
statsConfig,
terminalConfig,
forceKeyboardInteractive,
@@ -461,6 +498,10 @@ router.put(
tunnelConnections: Array.isArray(tunnelConnections)
? JSON.stringify(tunnelConnections)
: null,
jumpHosts: Array.isArray(jumpHosts) ? JSON.stringify(jumpHosts) : null,
quickActions: Array.isArray(quickActions)
? JSON.stringify(quickActions)
: null,
enableFileManager: enableFileManager ? 1 : 0,
defaultPath: defaultPath || null,
statsConfig: statsConfig ? JSON.stringify(statsConfig) : null,
@@ -537,6 +578,9 @@ router.put(
tunnelConnections: updatedHost.tunnelConnections
? JSON.parse(updatedHost.tunnelConnections as string)
: [],
jumpHosts: updatedHost.jumpHosts
? JSON.parse(updatedHost.jumpHosts as string)
: [],
enableFileManager: !!updatedHost.enableFileManager,
statsConfig: updatedHost.statsConfig
? JSON.parse(updatedHost.statsConfig as string)
@@ -558,6 +602,28 @@ router.put(
},
);
try {
const axios = (await import("axios")).default;
const statsPort = process.env.STATS_PORT || 30005;
await axios.post(
`http://localhost:${statsPort}/host-updated`,
{ hostId: parseInt(hostId) },
{
headers: {
Authorization: req.headers.authorization || "",
Cookie: req.headers.cookie || "",
},
timeout: 5000,
},
);
} catch (err) {
sshLogger.warn("Failed to notify stats server of host update", {
operation: "host_update",
hostId: parseInt(hostId),
error: err instanceof Error ? err.message : String(err),
});
}
res.json(resolvedHost);
} catch (err) {
sshLogger.error("Failed to update SSH host in database", err, {
@@ -576,67 +642,77 @@ router.put(
// Route: Get SSH data for the authenticated user (requires JWT)
// GET /ssh/host
router.get("/db/host", authenticateJWT, async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
sshLogger.warn("Invalid userId for SSH data fetch", {
operation: "host_fetch",
userId,
});
return res.status(400).json({ error: "Invalid userId" });
}
try {
const data = await SimpleDBOps.select(
db.select().from(sshData).where(eq(sshData.userId, userId)),
"ssh_data",
userId,
);
router.get(
"/db/host",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
sshLogger.warn("Invalid userId for SSH data fetch", {
operation: "host_fetch",
userId,
});
return res.status(400).json({ error: "Invalid userId" });
}
try {
const data = await SimpleDBOps.select(
db.select().from(sshData).where(eq(sshData.userId, userId)),
"ssh_data",
userId,
);
const result = await Promise.all(
data.map(async (row: Record<string, unknown>) => {
const baseHost = {
...row,
tags:
typeof row.tags === "string"
? row.tags
? row.tags.split(",").filter(Boolean)
: []
const result = await Promise.all(
data.map(async (row: Record<string, unknown>) => {
const baseHost = {
...row,
tags:
typeof row.tags === "string"
? row.tags
? row.tags.split(",").filter(Boolean)
: []
: [],
pin: !!row.pin,
enableTerminal: !!row.enableTerminal,
enableTunnel: !!row.enableTunnel,
tunnelConnections: row.tunnelConnections
? JSON.parse(row.tunnelConnections as string)
: [],
pin: !!row.pin,
enableTerminal: !!row.enableTerminal,
enableTunnel: !!row.enableTunnel,
tunnelConnections: row.tunnelConnections
? JSON.parse(row.tunnelConnections as string)
: [],
enableFileManager: !!row.enableFileManager,
statsConfig: row.statsConfig
? JSON.parse(row.statsConfig as string)
: undefined,
terminalConfig: row.terminalConfig
? JSON.parse(row.terminalConfig as string)
: undefined,
forceKeyboardInteractive: row.forceKeyboardInteractive === "true",
};
jumpHosts: row.jumpHosts ? JSON.parse(row.jumpHosts as string) : [],
quickActions: row.quickActions
? JSON.parse(row.quickActions as string)
: [],
enableFileManager: !!row.enableFileManager,
statsConfig: row.statsConfig
? JSON.parse(row.statsConfig as string)
: undefined,
terminalConfig: row.terminalConfig
? JSON.parse(row.terminalConfig as string)
: undefined,
forceKeyboardInteractive: row.forceKeyboardInteractive === "true",
};
return (await resolveHostCredentials(baseHost)) || baseHost;
}),
);
return (await resolveHostCredentials(baseHost)) || baseHost;
}),
);
res.json(result);
} catch (err) {
sshLogger.error("Failed to fetch SSH hosts from database", err, {
operation: "host_fetch",
userId,
});
res.status(500).json({ error: "Failed to fetch SSH data" });
}
});
res.json(result);
} catch (err) {
sshLogger.error("Failed to fetch SSH hosts from database", err, {
operation: "host_fetch",
userId,
});
res.status(500).json({ error: "Failed to fetch SSH data" });
}
},
);
// Route: Get SSH host by ID (requires JWT)
// GET /ssh/host/:id
router.get(
"/db/host/:id",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const hostId = req.params.id;
const userId = (req as AuthenticatedRequest).userId;
@@ -679,6 +755,8 @@ router.get(
tunnelConnections: host.tunnelConnections
? JSON.parse(host.tunnelConnections)
: [],
jumpHosts: host.jumpHosts ? JSON.parse(host.jumpHosts) : [],
quickActions: host.quickActions ? JSON.parse(host.quickActions) : [],
enableFileManager: !!host.enableFileManager,
statsConfig: host.statsConfig
? JSON.parse(host.statsConfig)
@@ -783,6 +861,7 @@ router.get(
router.delete(
"/db/host/:id",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const hostId = req.params.id;
@@ -816,8 +895,8 @@ router.delete(
.delete(fileManagerRecent)
.where(
and(
eq(fileManagerRecent.hostId, numericHostId),
eq(fileManagerRecent.userId, userId),
),
);
@@ -825,8 +904,8 @@ router.delete(
.delete(fileManagerPinned)
.where(
and(
eq(fileManagerPinned.hostId, numericHostId),
eq(fileManagerPinned.userId, userId),
),
);
@@ -834,8 +913,17 @@ router.delete(
.delete(fileManagerShortcuts)
.where(
and(
eq(fileManagerShortcuts.hostId, numericHostId),
eq(fileManagerShortcuts.userId, userId),
),
);
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.hostId, numericHostId),
eq(commandHistory.userId, userId),
),
);
@@ -843,8 +931,17 @@ router.delete(
.delete(sshCredentialUsage)
.where(
and(
eq(sshCredentialUsage.hostId, numericHostId),
eq(sshCredentialUsage.userId, userId),
),
);
await db
.delete(recentActivity)
.where(
and(
eq(recentActivity.hostId, numericHostId),
eq(recentActivity.userId, userId),
),
);
@@ -865,6 +962,28 @@ router.delete(
},
);
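// Best-effort notification: tell the stats service to drop the deleted host;
// failures are logged as warnings and never fail the delete itself.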
try {
const axios = (await import("axios")).default;
const statsPort = process.env.STATS_PORT || 30005;
await axios.post(
`http://localhost:${statsPort}/host-deleted`,
{ hostId: numericHostId },
{
headers: {
Authorization: req.headers.authorization || "",
Cookie: req.headers.cookie || "",
},
timeout: 5000,
},
);
} catch (err) {
sshLogger.warn("Failed to notify stats server of host deletion", {
operation: "host_delete",
hostId: numericHostId,
error: err instanceof Error ? err.message : String(err),
});
}
res.json({ message: "SSH host deleted" });
} catch (err) {
sshLogger.error("Failed to delete SSH host from database", err, {
@@ -1241,6 +1360,94 @@ router.delete(
},
);
// Route: Get command history for a host
// GET /ssh/command-history/:hostId
router.get(
"/command-history/:hostId",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const hostId = parseInt(req.params.hostId, 10);
if (!isNonEmptyString(userId) || !hostId) {
sshLogger.warn("Invalid userId or hostId for command history fetch", {
operation: "command_history_fetch",
hostId,
userId,
});
return res.status(400).json({ error: "Invalid userId or hostId" });
}
try {
const history = await db
.select({
id: commandHistory.id,
command: commandHistory.command,
})
.from(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostId),
),
)
.orderBy(desc(commandHistory.executedAt))
.limit(200);
res.json(history.map((h) => h.command));
} catch (err) {
sshLogger.error("Failed to fetch command history from database", err, {
operation: "command_history_fetch",
hostId,
userId,
});
res.status(500).json({ error: "Failed to fetch command history" });
}
},
);
// Route: Delete command from history
// DELETE /ssh/command-history
router.delete(
"/command-history",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId, command } = req.body;
if (!isNonEmptyString(userId) || !hostId || !command) {
sshLogger.warn("Invalid data for command history deletion", {
operation: "command_history_delete",
hostId,
userId,
});
return res.status(400).json({ error: "Invalid data" });
}
try {
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostId),
eq(commandHistory.command, command),
),
);
res.json({ message: "Command deleted from history" });
} catch (err) {
sshLogger.error("Failed to delete command from history", err, {
operation: "command_history_delete",
hostId,
userId,
command,
});
res.status(500).json({ error: "Failed to delete command" });
}
},
);
async function resolveHostCredentials(
host: Record<string, unknown>,
): Promise<Record<string, unknown>> {
@@ -1341,6 +1548,16 @@ router.put(
DatabaseSaveTrigger.triggerSave("folder_rename");
await db
.update(sshFolders)
.set({
name: newName,
updatedAt: new Date().toISOString(),
})
.where(
and(eq(sshFolders.userId, userId), eq(sshFolders.name, oldName)),
);
res.json({
message: "Folder renamed successfully",
updatedHosts: updatedHosts.length,
@@ -1358,6 +1575,170 @@ router.put(
},
);
// Route: Get all folders with metadata (requires JWT)
// GET /ssh/db/folders
router.get("/folders", authenticateJWT, async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
return res.status(400).json({ error: "Invalid user ID" });
}
try {
const folders = await db
.select()
.from(sshFolders)
.where(eq(sshFolders.userId, userId));
res.json(folders);
} catch (err) {
sshLogger.error("Failed to fetch folders", err, {
operation: "fetch_folders",
userId,
});
res.status(500).json({ error: "Failed to fetch folders" });
}
});
// Route: Update folder metadata (requires JWT)
// PUT /ssh/db/folders/metadata
router.put(
"/folders/metadata",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name, color, icon } = req.body;
if (!isNonEmptyString(userId) || !name) {
return res.status(400).json({ error: "Folder name is required" });
}
try {
const existing = await db
.select()
.from(sshFolders)
.where(and(eq(sshFolders.userId, userId), eq(sshFolders.name, name)))
.limit(1);
if (existing.length > 0) {
await db
.update(sshFolders)
.set({
color,
icon,
updatedAt: new Date().toISOString(),
})
.where(and(eq(sshFolders.userId, userId), eq(sshFolders.name, name)));
} else {
await db.insert(sshFolders).values({
userId,
name,
color,
icon,
createdAt: new Date().toISOString(),
updatedAt: new Date().toISOString(),
});
}
DatabaseSaveTrigger.triggerSave("folder_metadata_update");
res.json({ message: "Folder metadata updated successfully" });
} catch (err) {
sshLogger.error("Failed to update folder metadata", err, {
operation: "update_folder_metadata",
userId,
name,
});
res.status(500).json({ error: "Failed to update folder metadata" });
}
},
);
// Route: Delete all hosts in folder (requires JWT)
// DELETE /ssh/db/folders/:name/hosts
router.delete(
"/folders/:name/hosts",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const folderName = req.params.name;
if (!isNonEmptyString(userId) || !folderName) {
return res.status(400).json({ error: "Invalid folder name" });
}
try {
const hostsToDelete = await db
.select()
.from(sshData)
.where(and(eq(sshData.userId, userId), eq(sshData.folder, folderName)));
if (hostsToDelete.length === 0) {
return res.json({
message: "No hosts found in folder",
deletedCount: 0,
});
}
await db
.delete(sshData)
.where(and(eq(sshData.userId, userId), eq(sshData.folder, folderName)));
await db
.delete(sshFolders)
.where(
and(eq(sshFolders.userId, userId), eq(sshFolders.name, folderName)),
);
DatabaseSaveTrigger.triggerSave("folder_hosts_delete");
try {
const axios = (await import("axios")).default;
const statsPort = process.env.STATS_PORT || 30005;
for (const host of hostsToDelete) {
try {
await axios.post(
`http://localhost:${statsPort}/host-deleted`,
{ hostId: host.id },
{
headers: {
Authorization: req.headers.authorization || "",
Cookie: req.headers.cookie || "",
},
timeout: 5000,
},
);
} catch (err) {
sshLogger.warn("Failed to notify stats server of host deletion", {
operation: "folder_hosts_delete",
hostId: host.id,
error: err instanceof Error ? err.message : String(err),
});
}
}
} catch (err) {
sshLogger.warn("Failed to notify stats server of folder deletion", {
operation: "folder_hosts_delete",
folderName,
error: err instanceof Error ? err.message : String(err),
});
}
res.json({
message: "All hosts in folder deleted successfully",
deletedCount: hostsToDelete.length,
});
} catch (err) {
sshLogger.error("Failed to delete hosts in folder", err, {
operation: "delete_folder_hosts",
userId,
folderName,
});
res.status(500).json({ error: "Failed to delete hosts in folder" });
}
},
);
// Route: Bulk import SSH hosts (requires JWT)
// POST /ssh/bulk-import
router.post(

View File

@@ -0,0 +1,195 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { commandHistory } from "../db/schema.js";
import { eq, and, desc, sql } from "drizzle-orm";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
const router = express.Router();
function isNonEmptyString(val: unknown): val is string {
return typeof val === "string" && val.trim().length > 0;
}
const authManager = AuthManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const requireDataAccess = authManager.createDataAccessMiddleware();
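// Routes below require both a valid JWT and the AuthManager data-access check
// before touching user data.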
// Save command to history
// POST /terminal/command_history
router.post(
"/command_history",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId, command } = req.body;
if (!isNonEmptyString(userId) || !hostId || !isNonEmptyString(command)) {
authLogger.warn("Invalid command history save request", {
operation: "command_history_save",
userId,
hasHostId: !!hostId,
hasCommand: !!command,
});
return res.status(400).json({ error: "Missing required parameters" });
}
try {
const insertData = {
userId,
hostId: parseInt(hostId, 10),
command: command.trim(),
};
const result = await db
.insert(commandHistory)
.values(insertData)
.returning();
res.status(201).json(result[0]);
} catch (err) {
authLogger.error("Failed to save command to history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to save command",
});
}
},
);
// Get command history for a specific host
// GET /terminal/command_history/:hostId
router.get(
"/command_history/:hostId",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.params;
const hostIdNum = parseInt(hostId, 10);
if (!isNonEmptyString(userId) || isNaN(hostIdNum)) {
authLogger.warn("Invalid command history fetch request", {
userId,
hostId: hostIdNum,
});
return res.status(400).json({ error: "Invalid request parameters" });
}
try {
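// Deduplicate by command text and order by each command's most recent
// execution, so repeated commands surface only once.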
const result = await db
.select({
command: commandHistory.command,
maxExecutedAt: sql<number>`MAX(${commandHistory.executedAt})`,
})
.from(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
),
)
.groupBy(commandHistory.command)
.orderBy(desc(sql`MAX(${commandHistory.executedAt})`))
.limit(500);
const uniqueCommands = result.map((r) => r.command);
res.json(uniqueCommands);
} catch (err) {
authLogger.error("Failed to fetch command history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to fetch history",
});
}
},
);
// Delete a specific command from history
// POST /terminal/command_history/delete
router.post(
"/command_history/delete",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId, command } = req.body;
if (!isNonEmptyString(userId) || !hostId || !isNonEmptyString(command)) {
authLogger.warn("Invalid command delete request", {
operation: "command_history_delete",
userId,
hasHostId: !!hostId,
hasCommand: !!command,
});
return res.status(400).json({ error: "Missing required parameters" });
}
try {
const hostIdNum = parseInt(hostId, 10);
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
eq(commandHistory.command, command.trim()),
),
);
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to delete command from history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to delete command",
});
}
},
);
// Clear command history for a specific host (optional feature)
// DELETE /terminal/command_history/:hostId
router.delete(
"/command_history/:hostId",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.params;
const hostIdNum = parseInt(hostId, 10);
if (!isNonEmptyString(userId) || isNaN(hostIdNum)) {
authLogger.warn("Invalid command history clear request");
return res.status(400).json({ error: "Invalid request" });
}
try {
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
),
);
authLogger.success(`Command history cleared for host ${hostId}`, {
operation: "command_history_clear_success",
userId,
hostId: hostIdNum,
});
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to clear command history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to clear history",
});
}
},
);
export default router;

View File

@@ -22,11 +22,12 @@ import { nanoid } from "nanoid";
import speakeasy from "speakeasy";
import QRCode from "qrcode";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { authLogger, databaseLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
import { DataCrypto } from "../../utils/data-crypto.js";
import { LazyFieldEncryption } from "../../utils/lazy-field-encryption.js";
import { parseUserAgent } from "../../utils/user-agent-parser.js";
import { loginRateLimiter } from "../../utils/login-rate-limiter.js";
const authManager = AuthManager.getInstance();
@@ -226,6 +227,16 @@ router.post("/create", async (req, res) => {
});
}
try {
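// Persist the in-memory database to disk right away so a crash shortly after
// signup cannot lose the new account.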
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error("Failed to persist user to disk", saveError, {
operation: "user_create_save_failed",
userId: id,
});
}
authLogger.success(
`Traditional user created: ${username} (is_admin: ${isFirstUser})`,
{
@@ -587,6 +598,7 @@ router.get("/oidc/callback", async (req, res) => {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
Accept: "application/json",
},
body: new URLSearchParams({
grant_type: "authorization_code",
@@ -736,12 +748,11 @@ router.get("/oidc/callback", async (req, res) => {
});
}
const deviceInfo = parseUserAgent(req);
let user = await db
.select()
.from(users)
.where(eq(users.oidc_identifier, identifier));
let isFirstUser = false;
if (!user || user.length === 0) {
@@ -750,6 +761,43 @@ router.get("/oidc/callback", async (req, res) => {
.get();
isFirstUser = ((countResult as { count?: number })?.count || 0) === 0;
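// The first user may always register; subsequent OIDC sign-ups are rejected
// when the allow_registration setting is disabled.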
if (!isFirstUser) {
try {
const regRow = db.$client
.prepare(
"SELECT value FROM settings WHERE key = 'allow_registration'",
)
.get();
if (regRow && (regRow as Record<string, unknown>).value !== "true") {
authLogger.warn(
"OIDC user attempted to register when registration is disabled",
{
operation: "oidc_registration_disabled",
identifier,
name,
},
);
let frontendUrl = (redirectUri as string).replace(
"/users/oidc/callback",
"",
);
if (frontendUrl.includes("localhost")) {
frontendUrl = "http://localhost:5173";
}
const redirectUrl = new URL(frontendUrl);
redirectUrl.searchParams.set("error", "registration_disabled");
return res.redirect(redirectUrl.toString());
}
} catch (e) {
authLogger.warn("Failed to check registration status during OIDC", {
operation: "oidc_registration_check",
error: e,
});
}
}
const id = nanoid();
await db.insert(users).values({
id,
@@ -769,7 +817,11 @@ router.get("/oidc/callback", async (req, res) => {
});
try {
const sessionDurationMs =
deviceInfo.type === "desktop" || deviceInfo.type === "mobile"
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
await authManager.registerOIDCUser(id, sessionDurationMs);
} catch (encryptionError) {
await db.delete(users).where(eq(users.id, id));
authLogger.error(
@@ -785,12 +837,27 @@ router.get("/oidc/callback", async (req, res) => {
});
}
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error("Failed to persist OIDC user to disk", saveError, {
operation: "oidc_user_create_save_failed",
userId: id,
});
}
user = await db.select().from(users).where(eq(users.id, id));
} else {
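// Dual-auth accounts keep their local username; only pure OIDC accounts are
// renamed from the identity provider's profile.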
const isDualAuth =
user[0].password_hash && user[0].password_hash.trim() !== "";
if (!isDualAuth) {
await db
.update(users)
.set({ username: name })
.where(eq(users.id, user[0].id));
}
user = await db.select().from(users).where(eq(users.id, user[0].id));
}
@@ -798,7 +865,7 @@ router.get("/oidc/callback", async (req, res) => {
const userRecord = user[0];
try {
await authManager.authenticateOIDCUser(userRecord.id, deviceInfo.type);
} catch (setupError) {
authLogger.error("Failed to setup OIDC user encryption", setupError, {
operation: "oidc_user_encryption_setup_failed",
@@ -806,7 +873,6 @@ router.get("/oidc/callback", async (req, res) => {
});
}
const token = await authManager.generateJWTToken(userRecord.id, {
deviceType: deviceInfo.type,
deviceInfo: deviceInfo.deviceInfo,
@@ -836,6 +902,8 @@ router.get("/oidc/callback", async (req, res) => {
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
res.clearCookie("jwt", authManager.getSecureCookieOptions(req));
return res
.cookie("jwt", token, authManager.getSecureCookieOptions(req, maxAge))
.redirect(redirectUrl.toString());
@@ -862,6 +930,7 @@ router.get("/oidc/callback", async (req, res) => {
// POST /users/login
router.post("/login", async (req, res) => {
const { username, password } = req.body;
const clientIp = req.ip || req.socket.remoteAddress || "unknown";
if (!isNonEmptyString(username) || !isNonEmptyString(password)) {
authLogger.warn("Invalid traditional login attempt", {
@@ -872,6 +941,20 @@ router.post("/login", async (req, res) => {
return res.status(400).json({ error: "Invalid username or password" });
}
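// Rate limiting is keyed on client IP plus username; locked-out attempts get
// a 429 with the remaining lockout time.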
const lockStatus = loginRateLimiter.isLocked(clientIp, username);
if (lockStatus.locked) {
authLogger.warn("Login attempt blocked due to rate limiting", {
operation: "user_login_blocked",
username,
ip: clientIp,
remainingTime: lockStatus.remainingTime,
});
return res.status(429).json({
error: "Too many login attempts. Please try again later.",
remainingTime: lockStatus.remainingTime,
});
}
try {
const row = db.$client
.prepare("SELECT value FROM settings WHERE key = 'allow_password_login'")
@@ -896,17 +979,26 @@ router.post("/login", async (req, res) => {
.where(eq(users.username, username));
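// Unknown users and wrong passwords both return the same generic 401 so
// responses do not reveal which usernames exist.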
if (!user || user.length === 0) {
loginRateLimiter.recordFailedAttempt(clientIp, username);
authLogger.warn(`Login failed: user not found`, {
operation: "user_login",
username,
ip: clientIp,
remainingAttempts: loginRateLimiter.getRemainingAttempts(
clientIp,
username,
),
});
return res.status(404).json({ error: "User not found" });
return res.status(401).json({ error: "Invalid username or password" });
}
const userRecord = user[0];
if (
userRecord.is_oidc &&
(!userRecord.password_hash || userRecord.password_hash.trim() === "")
) {
authLogger.warn("OIDC-only user attempted traditional login", {
operation: "user_login",
username,
userId: userRecord.id,
@@ -918,12 +1010,18 @@ router.post("/login", async (req, res) => {
const isMatch = await bcrypt.compare(password, userRecord.password_hash);
if (!isMatch) {
loginRateLimiter.recordFailedAttempt(clientIp, username);
authLogger.warn(`Login failed: incorrect password`, {
operation: "user_login",
username,
userId: userRecord.id,
ip: clientIp,
remainingAttempts: loginRateLimiter.getRemainingAttempts(
clientIp,
username,
),
});
return res.status(401).json({ error: "Incorrect password" });
return res.status(401).json({ error: "Invalid username or password" });
}
try {
@@ -935,12 +1033,24 @@ router.post("/login", async (req, res) => {
if (kekSalt.length === 0) {
await authManager.registerUser(userRecord.id, password);
}
} catch (error) {}
const deviceInfo = parseUserAgent(req);
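// Accounts flagged is_oidc unlock their data via the OIDC key path even on a
// password login (dual-auth); others derive keys from the password.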
let dataUnlocked = false;
if (userRecord.is_oidc) {
dataUnlocked = await authManager.authenticateOIDCUser(
userRecord.id,
deviceInfo.type,
);
} else {
dataUnlocked = await authManager.authenticateUser(
userRecord.id,
password,
deviceInfo.type,
);
}
if (!dataUnlocked) {
return res.status(401).json({ error: "Incorrect password" });
}
@@ -957,12 +1067,13 @@ router.post("/login", async (req, res) => {
});
}
const token = await authManager.generateJWTToken(userRecord.id, {
deviceType: deviceInfo.type,
deviceInfo: deviceInfo.deviceInfo,
});
loginRateLimiter.resetAttempts(clientIp, username);
authLogger.success(`User logged in successfully: ${username}`, {
operation: "user_login_success",
username,
@@ -970,6 +1081,7 @@ router.post("/login", async (req, res) => {
dataUnlocked: true,
deviceType: deviceInfo.type,
deviceInfo: deviceInfo.deviceInfo,
ip: clientIp,
});
const response: Record<string, unknown> = {
@@ -1016,7 +1128,15 @@ router.post("/logout", authenticateJWT, async (req, res) => {
try {
const payload = await authManager.verifyJWTToken(token);
sessionId = payload?.sessionId;
} catch (error) {
authLogger.debug(
"Token verification failed during logout (expected if token expired)",
{
operation: "logout_token_verify_failed",
userId,
},
);
}
}
await authManager.logoutUser(userId, sessionId);
@@ -1052,11 +1172,17 @@ router.get("/me", authenticateJWT, async (req: Request, res: Response) => {
return res.status(401).json({ error: "User not found" });
}
const hasPassword =
user[0].password_hash && user[0].password_hash.trim() !== "";
const hasOidc = user[0].is_oidc && user[0].oidc_identifier;
const isDualAuth = hasPassword && hasOidc;
res.json({
userId: user[0].id,
username: user[0].username,
is_admin: !!user[0].is_admin,
is_oidc: !!user[0].is_oidc,
is_dual_auth: isDualAuth,
totp_enabled: !!user[0].totp_enabled,
});
} catch (err) {
@@ -1610,6 +1736,7 @@ router.get("/list", authenticateJWT, async (req, res) => {
username: users.username,
is_admin: users.is_admin,
is_oidc: users.is_oidc,
password_hash: users.password_hash,
})
.from(users);
@@ -1653,6 +1780,16 @@ router.post("/make-admin", authenticateJWT, async (req, res) => {
.set({ is_admin: true })
.where(eq(users.username, username));
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error("Failed to persist admin promotion to disk", saveError, {
operation: "make_admin_save_failed",
username,
});
}
authLogger.success(
`User ${username} made admin by ${adminUser[0].username}`,
);
@@ -1702,6 +1839,16 @@ router.post("/remove-admin", authenticateJWT, async (req, res) => {
.set({ is_admin: false })
.where(eq(users.username, username));
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error("Failed to persist admin removal to disk", saveError, {
operation: "remove_admin_save_failed",
username,
});
}
authLogger.success(
`Admin status removed from ${username} by ${adminUser[0].username}`,
);
@@ -2106,7 +2253,6 @@ router.delete("/delete-user", authenticateJWT, async (req, res) => {
const targetUserId = targetUser[0].id;
try {
// Delete all user-related data to avoid foreign key constraints
await db
.delete(sshCredentialUsage)
.where(eq(sshCredentialUsage.userId, targetUserId));
@@ -2426,4 +2572,295 @@ router.post("/sessions/revoke-all", authenticateJWT, async (req, res) => {
}
});
// Route: Link OIDC user to existing password account (merge accounts)
// POST /users/link-oidc-to-password
router.post("/link-oidc-to-password", authenticateJWT, async (req, res) => {
const adminUserId = (req as AuthenticatedRequest).userId;
const { oidcUserId, targetUsername } = req.body;
if (!isNonEmptyString(oidcUserId) || !isNonEmptyString(targetUsername)) {
return res.status(400).json({
error: "OIDC user ID and target username are required",
});
}
try {
const adminUser = await db
.select()
.from(users)
.where(eq(users.id, adminUserId));
if (!adminUser || adminUser.length === 0 || !adminUser[0].is_admin) {
return res.status(403).json({ error: "Admin access required" });
}
const oidcUserRecords = await db
.select()
.from(users)
.where(eq(users.id, oidcUserId));
if (!oidcUserRecords || oidcUserRecords.length === 0) {
return res.status(404).json({ error: "OIDC user not found" });
}
const oidcUser = oidcUserRecords[0];
if (!oidcUser.is_oidc) {
return res.status(400).json({
error: "Source user is not an OIDC user",
});
}
const targetUserRecords = await db
.select()
.from(users)
.where(eq(users.username, targetUsername));
if (!targetUserRecords || targetUserRecords.length === 0) {
return res.status(404).json({ error: "Target password user not found" });
}
const targetUser = targetUserRecords[0];
if (targetUser.is_oidc || !targetUser.password_hash) {
return res.status(400).json({
error: "Target user must be a password-based account",
});
}
if (targetUser.client_id && targetUser.oidc_identifier) {
return res.status(400).json({
error: "Target user already has OIDC authentication configured",
});
}
authLogger.info("Linking OIDC user to password account", {
operation: "link_oidc_to_password",
oidcUserId,
oidcUsername: oidcUser.username,
targetUserId: targetUser.id,
targetUsername: targetUser.username,
adminUserId,
});
await db
.update(users)
.set({
is_oidc: true,
oidc_identifier: oidcUser.oidc_identifier,
client_id: oidcUser.client_id,
client_secret: oidcUser.client_secret,
issuer_url: oidcUser.issuer_url,
authorization_url: oidcUser.authorization_url,
token_url: oidcUser.token_url,
identifier_path: oidcUser.identifier_path,
name_path: oidcUser.name_path,
scopes: oidcUser.scopes || "openid email profile",
})
.where(eq(users.id, targetUser.id));
try {
await authManager.convertToOIDCEncryption(targetUser.id);
} catch (encryptionError) {
authLogger.error(
"Failed to convert encryption to OIDC during linking",
encryptionError,
{
operation: "link_convert_encryption_failed",
userId: targetUser.id,
},
);
await db
.update(users)
.set({
is_oidc: false,
oidc_identifier: null,
client_id: "",
client_secret: "",
issuer_url: "",
authorization_url: "",
token_url: "",
identifier_path: "",
name_path: "",
scopes: "openid email profile",
})
.where(eq(users.id, targetUser.id));
return res.status(500).json({
error:
"Failed to convert encryption for dual-auth. Please ensure the password account has encryption setup.",
details:
encryptionError instanceof Error
? encryptionError.message
: "Unknown error",
});
}
await authManager.revokeAllUserSessions(oidcUserId);
authManager.logoutUser(oidcUserId);
await db
.delete(recentActivity)
.where(eq(recentActivity.userId, oidcUserId));
await db.delete(users).where(eq(users.id, oidcUserId));
db.$client
.prepare("DELETE FROM settings WHERE key LIKE ?")
.run(`user_%_${oidcUserId}`);
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error("Failed to persist account linking to disk", saveError, {
operation: "link_oidc_save_failed",
oidcUserId,
targetUserId: targetUser.id,
});
}
authLogger.success(
`OIDC user ${oidcUser.username} linked to password account ${targetUser.username}`,
{
operation: "link_oidc_to_password_success",
oidcUserId,
oidcUsername: oidcUser.username,
targetUserId: targetUser.id,
targetUsername: targetUser.username,
adminUserId,
},
);
res.json({
success: true,
message: `OIDC user ${oidcUser.username} has been linked to ${targetUser.username}. The password account can now use both password and OIDC login.`,
});
} catch (err) {
authLogger.error("Failed to link OIDC user to password account", err, {
operation: "link_oidc_to_password_failed",
oidcUserId,
targetUsername,
adminUserId,
});
res.status(500).json({
error: "Failed to link accounts",
details: err instanceof Error ? err.message : "Unknown error",
});
}
});
// Route: Unlink OIDC from password account (admin only)
// POST /users/unlink-oidc-from-password
router.post("/unlink-oidc-from-password", authenticateJWT, async (req, res) => {
const adminUserId = (req as AuthenticatedRequest).userId;
const { userId } = req.body;
if (!userId) {
return res.status(400).json({
error: "User ID is required",
});
}
try {
const adminUser = await db
.select()
.from(users)
.where(eq(users.id, adminUserId));
if (!adminUser || adminUser.length === 0 || !adminUser[0].is_admin) {
authLogger.warn("Non-admin attempted to unlink OIDC from password", {
operation: "unlink_oidc_unauthorized",
adminUserId,
targetUserId: userId,
});
return res.status(403).json({
error: "Admin privileges required",
});
}
const targetUserRecords = await db
.select()
.from(users)
.where(eq(users.id, userId));
if (!targetUserRecords || targetUserRecords.length === 0) {
return res.status(404).json({
error: "User not found",
});
}
const targetUser = targetUserRecords[0];
if (!targetUser.is_oidc) {
return res.status(400).json({
error: "User does not have OIDC authentication enabled",
});
}
if (!targetUser.password_hash || targetUser.password_hash === "") {
return res.status(400).json({
error:
"Cannot unlink OIDC from a user without password authentication. This would leave the user unable to login.",
});
}
authLogger.info("Unlinking OIDC from password account", {
operation: "unlink_oidc_from_password_start",
targetUserId: targetUser.id,
targetUsername: targetUser.username,
adminUserId,
});
await db
.update(users)
.set({
is_oidc: false,
oidc_identifier: null,
client_id: "",
client_secret: "",
issuer_url: "",
authorization_url: "",
token_url: "",
identifier_path: "",
name_path: "",
scopes: "openid email profile",
})
.where(eq(users.id, targetUser.id));
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
authLogger.error(
"Failed to save database after unlinking OIDC",
saveError,
{
operation: "unlink_oidc_save_failed",
targetUserId: targetUser.id,
},
);
}
authLogger.success("OIDC unlinked from password account successfully", {
operation: "unlink_oidc_from_password_success",
targetUserId: targetUser.id,
targetUsername: targetUser.username,
adminUserId,
});
res.json({
success: true,
message: `OIDC authentication has been removed from ${targetUser.username}. The user can now only log in with a password.`,
});
} catch (err) {
authLogger.error("Failed to unlink OIDC from password account", err, {
operation: "unlink_oidc_from_password_failed",
targetUserId: userId,
adminUserId,
});
res.status(500).json({
error: "Failed to unlink OIDC",
details: err instanceof Error ? err.message : "Unknown error",
});
}
});
export default router;

View File

@@ -6,7 +6,7 @@ import { Client as SSHClient } from "ssh2";
import { getDb } from "../database/db/index.js";
import { sshCredentials, sshData } from "../database/db/schema.js";
import { eq, and } from "drizzle-orm";
import { fileLogger } from "../utils/logger.js";
import { fileLogger, sshLogger } from "../utils/logger.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { AuthManager } from "../utils/auth-manager.js";
import type { AuthenticatedRequest } from "../../types/index.js";
@@ -89,11 +89,179 @@ app.use(express.raw({ limit: "5gb", type: "application/octet-stream" }));
const authManager = AuthManager.getInstance();
app.use(authManager.createAuthMiddleware());
async function resolveJumpHost(
hostId: number,
userId: string,
): Promise<any | null> {
try {
const hosts = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (hosts.length === 0) {
return null;
}
const host = hosts[0];
if (host.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, host.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
return {
...host,
password: credential.password,
key:
credential.private_key || credential.privateKey || credential.key,
keyPassword: credential.key_password || credential.keyPassword,
keyType: credential.key_type || credential.keyType,
authType: credential.auth_type || credential.authType,
};
}
}
return host;
} catch (error) {
fileLogger.error("Failed to resolve jump host", error, {
operation: "resolve_jump_host",
hostId,
userId,
});
return null;
}
}
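// Builds the jump-host chain hop by hop: each hop is reached by forwarding a
// TCP stream out of the previous client (forwardOut) and handing that stream
// to the next connection as its transport socket (connectConfig.sock).
// Returns the innermost client, or null if any hop fails.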
async function createJumpHostChain(
jumpHosts: Array<{ hostId: number }>,
userId: string,
): Promise<SSHClient | null> {
if (!jumpHosts || jumpHosts.length === 0) {
return null;
}
let currentClient: SSHClient | null = null;
const clients: SSHClient[] = [];
try {
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostConfig = await resolveJumpHost(jumpHosts[i].hostId, userId);
if (!jumpHostConfig) {
fileLogger.error(`Jump host ${i + 1} not found`, undefined, {
operation: "jump_host_chain",
hostId: jumpHosts[i].hostId,
});
clients.forEach((c) => c.end());
return null;
}
const jumpClient = new SSHClient();
clients.push(jumpClient);
const connected = await new Promise<boolean>((resolve) => {
const timeout = setTimeout(() => {
resolve(false);
}, 30000);
jumpClient.on("ready", () => {
clearTimeout(timeout);
resolve(true);
});
jumpClient.on("error", (err) => {
clearTimeout(timeout);
fileLogger.error(`Jump host ${i + 1} connection failed`, err, {
operation: "jump_host_connect",
hostId: jumpHostConfig.id,
ip: jumpHostConfig.ip,
});
resolve(false);
});
const connectConfig: any = {
host: jumpHostConfig.ip,
port: jumpHostConfig.port || 22,
username: jumpHostConfig.username,
tryKeyboard: true,
readyTimeout: 30000,
};
if (jumpHostConfig.authType === "password" && jumpHostConfig.password) {
connectConfig.password = jumpHostConfig.password;
} else if (jumpHostConfig.authType === "key" && jumpHostConfig.key) {
const cleanKey = jumpHostConfig.key
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
connectConfig.privateKey = Buffer.from(cleanKey, "utf8");
if (jumpHostConfig.keyPassword) {
connectConfig.passphrase = jumpHostConfig.keyPassword;
}
}
if (currentClient) {
currentClient.forwardOut(
"127.0.0.1",
0,
jumpHostConfig.ip,
jumpHostConfig.port || 22,
(err, stream) => {
if (err) {
clearTimeout(timeout);
resolve(false);
return;
}
connectConfig.sock = stream;
jumpClient.connect(connectConfig);
},
);
} else {
jumpClient.connect(connectConfig);
}
});
if (!connected) {
clients.forEach((c) => c.end());
return null;
}
currentClient = jumpClient;
}
return currentClient;
} catch (error) {
fileLogger.error("Failed to create jump host chain", error, {
operation: "jump_host_chain",
});
clients.forEach((c) => c.end());
return null;
}
}
interface SSHSession {
client: SSHClient;
isConnected: boolean;
lastActive: number;
timeout?: NodeJS.Timeout;
activeOperations: number;
}
interface PendingTOTPSession {
@@ -118,9 +286,22 @@ const pendingTOTPSessions: Record<string, PendingTOTPSession> = {};
function cleanupSession(sessionId: string) {
const session = sshSessions[sessionId];
if (session) {
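// Defer teardown while file operations are still in flight; reschedule
// cleanup instead of closing the connection under an active transfer.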
if (session.activeOperations > 0) {
fileLogger.warn(
`Deferring session cleanup for ${sessionId} - ${session.activeOperations} active operations`,
{
operation: "cleanup_deferred",
sessionId,
activeOperations: session.activeOperations,
},
);
scheduleSessionCleanup(sessionId);
return;
}
try {
session.client.end();
} catch (error) {}
clearTimeout(session.timeout);
delete sshSessions[sessionId];
}
@@ -174,6 +355,7 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
credentialId,
userProvidedPassword,
forceKeyboardInteractive,
jumpHosts,
} = req.body;
const userId = (req as AuthenticatedRequest).userId;
@@ -393,6 +575,7 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
client,
isConnected: true,
lastActive: Date.now(),
activeOperations: 0,
};
scheduleSessionCleanup(sessionId);
res.json({ status: "success", message: "SSH connection established" });
@@ -625,7 +808,52 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
},
);
if (jumpHosts && jumpHosts.length > 0 && userId) {
try {
const jumpClient = await createJumpHostChain(jumpHosts, userId);
if (!jumpClient) {
fileLogger.error("Failed to establish jump host chain", {
operation: "file_jump_chain",
sessionId,
hostId,
});
return res
.status(500)
.json({ error: "Failed to connect through jump hosts" });
}
jumpClient.forwardOut("127.0.0.1", 0, ip, port, (err, stream) => {
if (err) {
fileLogger.error("Failed to forward through jump host", err, {
operation: "file_jump_forward",
sessionId,
hostId,
ip,
port,
});
jumpClient.end();
return res.status(500).json({
error: "Failed to forward through jump host: " + err.message,
});
}
config.sock = stream;
client.connect(config);
});
} catch (error) {
fileLogger.error("Jump host error", error, {
operation: "file_jump_host",
sessionId,
hostId,
});
return res
.status(500)
.json({ error: "Failed to connect through jump hosts" });
}
} else {
client.connect(config);
}
});
app.post("/ssh/file_manager/ssh/connect-totp", async (req, res) => {
@@ -663,7 +891,9 @@ app.post("/ssh/file_manager/ssh/connect-totp", async (req, res) => {
delete pendingTOTPSessions[sessionId];
try {
session.client.end();
} catch (error) {
sshLogger.debug("Operation failed, continuing", { error });
}
fileLogger.warn("TOTP session timeout before code submission", {
operation: "file_totp_verify",
sessionId,
@@ -700,6 +930,7 @@ app.post("/ssh/file_manager/ssh/connect-totp", async (req, res) => {
client: session.client,
isConnected: true,
lastActive: Date.now(),
activeOperations: 0,
};
scheduleSessionCleanup(sessionId);
@@ -843,10 +1074,12 @@ app.get("/ssh/file_manager/ssh/listFiles", (req, res) => {
}
sshConn.lastActive = Date.now();
sshConn.activeOperations++;
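// POSIX shell escaping for single quotes: close the quoted string, emit an
// escaped quote, and reopen it.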
const escapedPath = sshPath.replace(/'/g, "'\"'\"'");
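// "command ls" bypasses shell aliases and functions so the listing format
// stays parseable.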
sshConn.client.exec(`command ls -la '${escapedPath}'`, (err, stream) => {
if (err) {
sshConn.activeOperations--;
fileLogger.error("SSH listFiles error:", err);
return res.status(500).json({ error: err.message });
}
@@ -863,6 +1096,7 @@ app.get("/ssh/file_manager/ssh/listFiles", (req, res) => {
});
stream.on("close", (code) => {
sshConn.activeOperations--;
if (code !== 0) {
fileLogger.error(
`SSH listFiles command failed with code ${code}: ${errorData.replace(/\n/g, " ").trim()}`,
@@ -2486,6 +2720,516 @@ app.post("/ssh/file_manager/ssh/executeFile", async (req, res) => {
});
});
app.post("/ssh/file_manager/ssh/changePermissions", async (req, res) => {
const { sessionId, path, permissions } = req.body;
const sshConn = sshSessions[sessionId];
if (!sshConn || !sshConn.isConnected) {
fileLogger.error(
"SSH connection not found or not connected for changePermissions",
{
operation: "change_permissions",
sessionId,
hasConnection: !!sshConn,
isConnected: sshConn?.isConnected,
},
);
return res.status(400).json({ error: "SSH connection not available" });
}
if (!path) {
return res.status(400).json({ error: "File path is required" });
}
if (!permissions || !/^\d{3,4}$/.test(permissions)) {
return res.status(400).json({
error: "Valid permissions required (e.g., 755, 644)",
});
}
sshConn.lastActive = Date.now();
scheduleSessionCleanup(sessionId);
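// Keep only the last three digits, so a four-digit mode like 0755 is applied
// without the setuid/setgid/sticky digit.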
const octalPerms = permissions.slice(-3);
const escapedPath = path.replace(/'/g, "'\"'\"'");
const command = `chmod ${octalPerms} '${escapedPath}' && echo "SUCCESS"`;
fileLogger.info("Changing file permissions", {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
const commandTimeout = setTimeout(() => {
if (!res.headersSent) {
fileLogger.error("changePermissions command timeout", {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
res.status(408).json({
error: "Permission change timed out. SSH connection may be unstable.",
});
}
}, 10000);
sshConn.client.exec(command, (err, stream) => {
if (err) {
clearTimeout(commandTimeout);
fileLogger.error("SSH changePermissions exec error:", err, {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
if (!res.headersSent) {
return res.status(500).json({ error: "Failed to change permissions" });
}
return;
}
let outputData = "";
let errorOutput = "";
stream.on("data", (chunk: Buffer) => {
outputData += chunk.toString();
});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
});
stream.on("close", (code) => {
clearTimeout(commandTimeout);
if (outputData.includes("SUCCESS")) {
fileLogger.success("File permissions changed successfully", {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
if (!res.headersSent) {
res.json({
success: true,
message: "Permissions changed successfully",
});
}
return;
}
if (code !== 0) {
fileLogger.error("chmod command failed", {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
exitCode: code,
error: errorOutput,
});
if (!res.headersSent) {
return res.status(500).json({
error: errorOutput || "Failed to change permissions",
});
}
return;
}
fileLogger.success("File permissions changed successfully", {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
if (!res.headersSent) {
res.json({
success: true,
message: "Permissions changed successfully",
});
}
});
stream.on("error", (streamErr) => {
clearTimeout(commandTimeout);
fileLogger.error("SSH changePermissions stream error:", streamErr, {
operation: "change_permissions",
sessionId,
path,
permissions: octalPerms,
});
if (!res.headersSent) {
res
.status(500)
.json({ error: "Stream error while changing permissions" });
}
});
});
});
// Route: Extract archive file (requires JWT)
// POST /ssh/file_manager/ssh/extractArchive
app.post("/ssh/file_manager/ssh/extractArchive", async (req, res) => {
const { sessionId, archivePath, extractPath } = req.body;
if (!sessionId || !archivePath) {
return res.status(400).json({ error: "Missing required parameters" });
}
const session = sshSessions[sessionId];
if (!session || !session.isConnected) {
return res.status(400).json({ error: "SSH session not connected" });
}
session.lastActive = Date.now();
scheduleSessionCleanup(sessionId);
const fileName = archivePath.split("/").pop() || "";
const fileExt = fileName.toLowerCase();
let extractCommand = "";
const targetPath =
extractPath || archivePath.substring(0, archivePath.lastIndexOf("/"));
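// Choose the extraction tool from the file extension; bare .gz/.bz2/.xz files
// (non-tar) are decompressed alongside the archive rather than into targetPath.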
if (fileExt.endsWith(".tar.gz") || fileExt.endsWith(".tgz")) {
extractCommand = `tar -xzf "${archivePath}" -C "${targetPath}"`;
} else if (fileExt.endsWith(".tar.bz2") || fileExt.endsWith(".tbz2")) {
extractCommand = `tar -xjf "${archivePath}" -C "${targetPath}"`;
} else if (fileExt.endsWith(".tar.xz")) {
extractCommand = `tar -xJf "${archivePath}" -C "${targetPath}"`;
} else if (fileExt.endsWith(".tar")) {
extractCommand = `tar -xf "${archivePath}" -C "${targetPath}"`;
} else if (fileExt.endsWith(".zip")) {
extractCommand = `unzip -o "${archivePath}" -d "${targetPath}"`;
} else if (fileExt.endsWith(".gz") && !fileExt.endsWith(".tar.gz")) {
extractCommand = `gunzip -c "${archivePath}" > "${archivePath.replace(/\.gz$/, "")}"`;
} else if (fileExt.endsWith(".bz2") && !fileExt.endsWith(".tar.bz2")) {
extractCommand = `bunzip2 -k "${archivePath}"`;
} else if (fileExt.endsWith(".xz") && !fileExt.endsWith(".tar.xz")) {
extractCommand = `unxz -k "${archivePath}"`;
} else if (fileExt.endsWith(".7z")) {
extractCommand = `7z x "${archivePath}" -o"${targetPath}"`;
} else if (fileExt.endsWith(".rar")) {
extractCommand = `unrar x "${archivePath}" "${targetPath}/"`;
} else {
return res.status(400).json({ error: "Unsupported archive format" });
}
fileLogger.info("Extracting archive", {
operation: "extract_archive",
sessionId,
archivePath,
extractPath: targetPath,
command: extractCommand,
});
session.client.exec(extractCommand, (err, stream) => {
if (err) {
fileLogger.error("SSH exec error during extract:", err, {
operation: "extract_archive",
sessionId,
archivePath,
});
return res
.status(500)
.json({ error: "Failed to execute extract command" });
}
let errorOutput = "";
stream.on("data", (data: Buffer) => {
fileLogger.debug("Extract stdout", {
operation: "extract_archive",
sessionId,
output: data.toString(),
});
});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
fileLogger.debug("Extract stderr", {
operation: "extract_archive",
sessionId,
error: data.toString(),
});
});
stream.on("close", (code: number) => {
if (code !== 0) {
fileLogger.error("Extract command failed", {
operation: "extract_archive",
sessionId,
archivePath,
exitCode: code,
error: errorOutput,
});
let friendlyError = errorOutput || "Failed to extract archive";
if (
errorOutput.includes("command not found") ||
errorOutput.includes("not found")
) {
let missingCmd = "";
let installHint = "";
if (fileExt.endsWith(".zip")) {
missingCmd = "unzip";
installHint =
"apt install unzip / yum install unzip / brew install unzip";
} else if (
fileExt.endsWith(".tar.gz") ||
fileExt.endsWith(".tgz") ||
fileExt.endsWith(".tar.bz2") ||
fileExt.endsWith(".tbz2") ||
fileExt.endsWith(".tar.xz") ||
fileExt.endsWith(".tar")
) {
missingCmd = "tar";
installHint = "Usually pre-installed on Linux/Unix systems";
} else if (fileExt.endsWith(".gz")) {
missingCmd = "gunzip";
installHint =
"apt install gzip / yum install gzip / Usually pre-installed";
} else if (fileExt.endsWith(".bz2")) {
missingCmd = "bunzip2";
installHint =
"apt install bzip2 / yum install bzip2 / brew install bzip2";
} else if (fileExt.endsWith(".xz")) {
missingCmd = "unxz";
installHint =
"apt install xz-utils / yum install xz / brew install xz";
} else if (fileExt.endsWith(".7z")) {
missingCmd = "7z";
installHint =
"apt install p7zip-full / yum install p7zip / brew install p7zip";
} else if (fileExt.endsWith(".rar")) {
missingCmd = "unrar";
installHint =
"apt install unrar / yum install unrar / brew install unrar";
}
if (missingCmd) {
friendlyError = `Command '${missingCmd}' not found on remote server. Please install it first: ${installHint}`;
}
}
return res.status(500).json({ error: friendlyError });
}
fileLogger.success("Archive extracted successfully", {
operation: "extract_archive",
sessionId,
archivePath,
extractPath: targetPath,
});
res.json({
success: true,
message: "Archive extracted successfully",
extractPath: targetPath,
});
});
stream.on("error", (streamErr) => {
fileLogger.error("SSH extractArchive stream error:", streamErr, {
operation: "extract_archive",
sessionId,
archivePath,
});
if (!res.headersSent) {
res
.status(500)
.json({ error: "Stream error while extracting archive" });
}
});
});
});
// Route: Compress files/folders (requires JWT)
// POST /ssh/file_manager/ssh/compressFiles
app.post("/ssh/file_manager/ssh/compressFiles", async (req, res) => {
const { sessionId, paths, archiveName, format } = req.body;
if (
!sessionId ||
!paths ||
!Array.isArray(paths) ||
paths.length === 0 ||
!archiveName
) {
return res.status(400).json({ error: "Missing required parameters" });
}
const session = sshSessions[sessionId];
if (!session || !session.isConnected) {
return res.status(400).json({ error: "SSH session not connected" });
}
session.lastActive = Date.now();
scheduleSessionCleanup(sessionId);
const compressionFormat = format || "zip";
let compressCommand = "";
const firstPath = paths[0];
const workingDir = firstPath.substring(0, firstPath.lastIndexOf("/")) || "/";
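// Archive relative to the first selection's parent directory so entries are
// stored under their base names rather than absolute paths.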
const fileNames = paths
.map((p) => {
const name = p.split("/").pop();
return `"${name}"`;
})
.join(" ");
let archivePath = "";
if (archiveName.includes("/")) {
archivePath = archiveName;
} else {
archivePath = workingDir.endsWith("/")
? `${workingDir}${archiveName}`
: `${workingDir}/${archiveName}`;
}
if (compressionFormat === "zip") {
compressCommand = `cd "${workingDir}" && zip -r "${archivePath}" ${fileNames}`;
} else if (compressionFormat === "tar.gz" || compressionFormat === "tgz") {
compressCommand = `cd "${workingDir}" && tar -czf "${archivePath}" ${fileNames}`;
} else if (compressionFormat === "tar.bz2" || compressionFormat === "tbz2") {
compressCommand = `cd "${workingDir}" && tar -cjf "${archivePath}" ${fileNames}`;
} else if (compressionFormat === "tar.xz") {
compressCommand = `cd "${workingDir}" && tar -cJf "${archivePath}" ${fileNames}`;
} else if (compressionFormat === "tar") {
compressCommand = `cd "${workingDir}" && tar -cf "${archivePath}" ${fileNames}`;
} else if (compressionFormat === "7z") {
compressCommand = `cd "${workingDir}" && 7z a "${archivePath}" ${fileNames}`;
} else {
return res.status(400).json({ error: "Unsupported compression format" });
}
fileLogger.info("Compressing files", {
operation: "compress_files",
sessionId,
paths,
archivePath,
format: compressionFormat,
command: compressCommand,
});
session.client.exec(compressCommand, (err, stream) => {
if (err) {
fileLogger.error("SSH exec error during compress:", err, {
operation: "compress_files",
sessionId,
paths,
});
return res
.status(500)
.json({ error: "Failed to execute compress command" });
}
let errorOutput = "";
stream.on("data", (data: Buffer) => {
fileLogger.debug("Compress stdout", {
operation: "compress_files",
sessionId,
output: data.toString(),
});
});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
fileLogger.debug("Compress stderr", {
operation: "compress_files",
sessionId,
error: data.toString(),
});
});
stream.on("close", (code: number) => {
if (code !== 0) {
fileLogger.error("Compress command failed", {
operation: "compress_files",
sessionId,
paths,
archivePath,
exitCode: code,
error: errorOutput,
});
let friendlyError = errorOutput || "Failed to compress files";
if (
errorOutput.includes("command not found") ||
errorOutput.includes("not found")
) {
const commandMap: Record<string, { cmd: string; install: string }> = {
zip: {
cmd: "zip",
install: "apt install zip / yum install zip / brew install zip",
},
"tar.gz": {
cmd: "tar",
install: "Usually pre-installed on Linux/Unix systems",
},
"tar.bz2": {
cmd: "tar",
install: "Usually pre-installed on Linux/Unix systems",
},
"tar.xz": {
cmd: "tar",
install: "Usually pre-installed on Linux/Unix systems",
},
tar: {
cmd: "tar",
install: "Usually pre-installed on Linux/Unix systems",
},
"7z": {
cmd: "7z",
install:
"apt install p7zip-full / yum install p7zip / brew install p7zip",
},
};
const info = commandMap[compressionFormat];
if (info) {
friendlyError = `Command '${info.cmd}' not found on remote server. Please install it first: ${info.install}`;
}
}
return res.status(500).json({ error: friendlyError });
}
fileLogger.success("Files compressed successfully", {
operation: "compress_files",
sessionId,
paths,
archivePath,
format: compressionFormat,
});
res.json({
success: true,
message: "Files compressed successfully",
archivePath: archivePath,
});
});
stream.on("error", (streamErr) => {
fileLogger.error("SSH compressFiles stream error:", streamErr, {
operation: "compress_files",
sessionId,
paths,
});
if (!res.headersSent) {
res.status(500).json({ error: "Stream error while compressing files" });
}
});
});
});
process.on("SIGINT", () => {
Object.keys(sshSessions).forEach(cleanupSession);
process.exit(0);

View File

@@ -6,10 +6,185 @@ import { Client, type ConnectConfig } from "ssh2";
import { getDb } from "../database/db/index.js";
import { sshData, sshCredentials } from "../database/db/schema.js";
import { eq, and } from "drizzle-orm";
import { statsLogger } from "../utils/logger.js";
import { statsLogger, sshLogger } from "../utils/logger.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { AuthManager } from "../utils/auth-manager.js";
import type { AuthenticatedRequest } from "../../types/index.js";
import { collectCpuMetrics } from "./widgets/cpu-collector.js";
import { collectMemoryMetrics } from "./widgets/memory-collector.js";
import { collectDiskMetrics } from "./widgets/disk-collector.js";
import { collectNetworkMetrics } from "./widgets/network-collector.js";
import { collectUptimeMetrics } from "./widgets/uptime-collector.js";
import { collectProcessesMetrics } from "./widgets/processes-collector.js";
import { collectSystemMetrics } from "./widgets/system-collector.js";
import { collectLoginStats } from "./widgets/login-stats-collector.js";
async function resolveJumpHost(
hostId: number,
userId: string,
): Promise<any | null> {
try {
const hosts = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (hosts.length === 0) {
return null;
}
const host = hosts[0];
if (host.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, host.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
return {
...host,
password: credential.password,
key:
credential.private_key || credential.privateKey || credential.key,
keyPassword: credential.key_password || credential.keyPassword,
keyType: credential.key_type || credential.keyType,
authType: credential.auth_type || credential.authType,
};
}
}
return host;
} catch (error) {
statsLogger.error("Failed to resolve jump host", error, {
operation: "resolve_jump_host",
hostId,
userId,
});
return null;
}
}
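// Same jump-host chaining as the file-manager service: each hop's forwarded
// stream becomes the next connection's transport socket.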
async function createJumpHostChain(
jumpHosts: Array<{ hostId: number }>,
userId: string,
): Promise<Client | null> {
if (!jumpHosts || jumpHosts.length === 0) {
return null;
}
let currentClient: Client | null = null;
const clients: Client[] = [];
try {
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostConfig = await resolveJumpHost(jumpHosts[i].hostId, userId);
if (!jumpHostConfig) {
statsLogger.error(`Jump host ${i + 1} not found`, undefined, {
operation: "jump_host_chain",
hostId: jumpHosts[i].hostId,
});
clients.forEach((c) => c.end());
return null;
}
const jumpClient = new Client();
clients.push(jumpClient);
const connected = await new Promise<boolean>((resolve) => {
const timeout = setTimeout(() => {
resolve(false);
}, 30000);
jumpClient.on("ready", () => {
clearTimeout(timeout);
resolve(true);
});
jumpClient.on("error", (err) => {
clearTimeout(timeout);
statsLogger.error(`Jump host ${i + 1} connection failed`, err, {
operation: "jump_host_connect",
hostId: jumpHostConfig.id,
ip: jumpHostConfig.ip,
});
resolve(false);
});
const connectConfig: any = {
host: jumpHostConfig.ip,
port: jumpHostConfig.port || 22,
username: jumpHostConfig.username,
tryKeyboard: true,
readyTimeout: 30000,
};
if (jumpHostConfig.authType === "password" && jumpHostConfig.password) {
connectConfig.password = jumpHostConfig.password;
} else if (jumpHostConfig.authType === "key" && jumpHostConfig.key) {
const cleanKey = jumpHostConfig.key
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
connectConfig.privateKey = Buffer.from(cleanKey, "utf8");
if (jumpHostConfig.keyPassword) {
connectConfig.passphrase = jumpHostConfig.keyPassword;
}
}
if (currentClient) {
currentClient.forwardOut(
"127.0.0.1",
0,
jumpHostConfig.ip,
jumpHostConfig.port || 22,
(err, stream) => {
if (err) {
clearTimeout(timeout);
resolve(false);
return;
}
connectConfig.sock = stream;
jumpClient.connect(connectConfig);
},
);
} else {
jumpClient.connect(connectConfig);
}
});
if (!connected) {
clients.forEach((c) => c.end());
return null;
}
currentClient = jumpClient;
}
return currentClient;
} catch (error) {
statsLogger.error("Failed to create jump host chain", error, {
operation: "jump_host_chain",
});
clients.forEach((c) => c.end());
return null;
}
}
interface PooledConnection {
client: Client;
@@ -79,7 +254,7 @@ class SSHConnectionPool {
private async createConnection(
host: SSHHostWithCredentials,
): Promise<Client> {
return new Promise(async (resolve, reject) => {
const client = new Client();
const timeout = setTimeout(() => {
client.end();
@@ -120,7 +295,12 @@ class SSHConnectionPool {
),
);
} else if (host.password) {
const responses = prompts.map((p) => {
if (/password/i.test(p.prompt)) {
return host.password || "";
}
return "";
});
finish(responses);
} else {
finish(prompts.map(() => ""));
@@ -129,7 +309,44 @@ class SSHConnectionPool {
);
try {
const config = buildSshConfig(host);
if (host.jumpHosts && host.jumpHosts.length > 0 && host.userId) {
const jumpClient = await createJumpHostChain(
host.jumpHosts,
host.userId,
);
if (!jumpClient) {
clearTimeout(timeout);
reject(new Error("Failed to establish jump host chain"));
return;
}
jumpClient.forwardOut(
"127.0.0.1",
0,
host.ip,
host.port,
(err, stream) => {
if (err) {
clearTimeout(timeout);
jumpClient.end();
reject(
new Error(
"Failed to forward through jump host: " + err.message,
),
);
return;
}
config.sock = stream;
client.connect(config);
},
);
} else {
client.connect(config);
}
} catch (err) {
clearTimeout(timeout);
reject(err);
@@ -156,7 +373,7 @@ class SSHConnectionPool {
if (!conn.inUse && now - conn.lastUsed > maxAge) {
try {
conn.client.end();
} catch (error) {}
return false;
}
return true;
@@ -176,7 +393,7 @@ class SSHConnectionPool {
for (const conn of connections) {
try {
conn.client.end();
} catch (error) {}
}
}
this.connections.clear();
@@ -214,7 +431,7 @@ class RequestQueue {
if (request) {
try {
await request();
} catch (error) {}
}
}
@@ -382,7 +599,8 @@ interface SSHHostWithCredentials {
enableFileManager: boolean;
defaultPath: string;
tunnelConnections: unknown[];
jumpHosts?: Array<{ hostId: number }>;
statsConfig?: string | StatsConfig;
createdAt: string;
updatedAt: string;
userId: string;
@@ -427,34 +645,64 @@ class PollingManager {
}
>();
parseStatsConfig(statsConfigStr?: string | StatsConfig): StatsConfig {
if (!statsConfigStr) {
return DEFAULT_STATS_CONFIG;
}
let parsed: StatsConfig;
if (typeof statsConfigStr === "object") {
parsed = statsConfigStr;
} else {
try {
let temp: any = JSON.parse(statsConfigStr);
if (typeof temp === "string") {
temp = JSON.parse(temp);
}
parsed = temp;
} catch (error) {
statsLogger.warn(
`Failed to parse statsConfig: ${error instanceof Error ? error.message : "Unknown error"}`,
{
operation: "parse_stats_config_error",
statsConfigStr,
},
);
return DEFAULT_STATS_CONFIG;
}
}
const result = { ...DEFAULT_STATS_CONFIG, ...parsed };
return result;
}
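The double `JSON.parse` pass exists because some stored configs arrive double-encoded; a quick illustration with made-up values:

```typescript
const once = JSON.stringify({ metricsEnabled: true });
const twice = JSON.stringify(once); // '"{\"metricsEnabled\":true}"'

// The first parse of a double-encoded value yields a string, not an object,
// so parseStatsConfig parses a second time before merging with defaults.
let temp: any = JSON.parse(twice);
if (typeof temp === "string") {
  temp = JSON.parse(temp);
}
console.log(temp.metricsEnabled); // true
```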
async startPollingForHost(host: SSHHostWithCredentials): Promise<void> {
const statsConfig = this.parseStatsConfig(host.statsConfig);
const existingConfig = this.pollingConfigs.get(host.id);
if (existingConfig) {
if (existingConfig.statusTimer) {
clearInterval(existingConfig.statusTimer);
existingConfig.statusTimer = undefined;
}
if (existingConfig.metricsTimer) {
clearInterval(existingConfig.metricsTimer);
existingConfig.metricsTimer = undefined;
}
}
if (!statsConfig.statusCheckEnabled && !statsConfig.metricsEnabled) {
this.pollingConfigs.delete(host.id);
this.statusStore.delete(host.id);
this.metricsStore.delete(host.id);
return;
}
const config: HostPollingConfig = {
host,
statsConfig,
@@ -466,7 +714,10 @@ class PollingManager {
this.pollHostStatus(host);
config.statusTimer = setInterval(() => {
const latestConfig = this.pollingConfigs.get(host.id);
if (latestConfig && latestConfig.statsConfig.statusCheckEnabled) {
this.pollHostStatus(latestConfig.host);
}
}, intervalMs);
} else {
this.statusStore.delete(host.id);
@@ -478,7 +729,10 @@ class PollingManager {
this.pollHostMetrics(host);
config.metricsTimer = setInterval(() => {
const latestConfig = this.pollingConfigs.get(host.id);
if (latestConfig && latestConfig.statsConfig.metricsEnabled) {
this.pollHostMetrics(latestConfig.host);
}
}, intervalMs);
} else {
this.metricsStore.delete(host.id);
@@ -505,27 +759,51 @@ class PollingManager {
}
private async pollHostMetrics(host: SSHHostWithCredentials): Promise<void> {
const config = this.pollingConfigs.get(host.id);
if (!config || !config.statsConfig.metricsEnabled) {
return;
}
const currentHost = config.host;
try {
const metrics = await collectMetrics(currentHost);
this.metricsStore.set(currentHost.id, {
data: metrics,
timestamp: Date.now(),
});
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error);
const latestConfig = this.pollingConfigs.get(currentHost.id);
if (latestConfig && latestConfig.statsConfig.metricsEnabled) {
statsLogger.warn("Failed to collect metrics for host", {
operation: "metrics_poll_failed",
hostId: currentHost.id,
hostName: currentHost.name,
error: errorMessage,
});
}
}
}
stopPollingForHost(hostId: number, clearData = true): void {
const config = this.pollingConfigs.get(hostId);
if (config) {
if (config.statusTimer) {
clearInterval(config.statusTimer);
config.statusTimer = undefined;
}
if (config.metricsTimer) {
clearInterval(config.metricsTimer);
config.metricsTimer = undefined;
}
this.pollingConfigs.delete(hostId);
if (clearData) {
this.statusStore.delete(hostId);
this.metricsStore.delete(hostId);
}
}
}
@@ -554,11 +832,23 @@ class PollingManager {
}
async refreshHostPolling(userId: string): Promise<void> {
const hosts = await fetchAllHosts(userId);
const currentHostIds = new Set(hosts.map((h) => h.id));
for (const hostId of this.pollingConfigs.keys()) {
this.stopPollingForHost(hostId, false);
}
for (const hostId of this.statusStore.keys()) {
if (!currentHostIds.has(hostId)) {
this.statusStore.delete(hostId);
this.metricsStore.delete(hostId);
}
}
for (const host of hosts) {
await this.startPollingForHost(host);
}
}
destroy(): void {
@@ -712,6 +1002,7 @@ async function resolveHostCredentials(
tunnelConnections: host.tunnelConnections
? JSON.parse(host.tunnelConnections as string)
: [],
jumpHosts: host.jumpHosts ? JSON.parse(host.jumpHosts as string) : [],
statsConfig: host.statsConfig || undefined,
createdAt: host.createdAt,
updatedAt: host.updatedAt,
@@ -911,59 +1202,6 @@ async function withSshConnection<T>(
}
}
function execCommand(
client: Client,
command: string,
): Promise<{
stdout: string;
stderr: string;
code: number | null;
}> {
return new Promise((resolve, reject) => {
client.exec(command, { pty: false }, (err, stream) => {
if (err) return reject(err);
let stdout = "";
let stderr = "";
let exitCode: number | null = null;
stream
.on("close", (code: number | undefined) => {
exitCode = typeof code === "number" ? code : null;
resolve({ stdout, stderr, code: exitCode });
})
.on("data", (data: Buffer) => {
stdout += data.toString("utf8");
})
.stderr.on("data", (data: Buffer) => {
stderr += data.toString("utf8");
});
});
});
}
function parseCpuLine(
cpuLine: string,
): { total: number; idle: number } | undefined {
const parts = cpuLine.trim().split(/\s+/);
if (parts[0] !== "cpu") return undefined;
const nums = parts
.slice(1)
.map((n) => Number(n))
.filter((n) => Number.isFinite(n));
if (nums.length < 4) return undefined;
const idle = (nums[3] ?? 0) + (nums[4] ?? 0);
const total = nums.reduce((a, b) => a + b, 0);
return { total, idle };
}
function toFixedNum(n: number | null | undefined, digits = 2): number | null {
if (typeof n !== "number" || !Number.isFinite(n)) return null;
return Number(n.toFixed(digits));
}
function kibToGiB(kib: number): number {
return kib / (1024 * 1024);
}
async function collectMetrics(host: SSHHostWithCredentials): Promise<{
cpu: {
percent: number | null;
@@ -1026,298 +1264,38 @@ async function collectMetrics(host: SSHHostWithCredentials): Promise<{
return requestQueue.queueRequest(host.id, async () => {
try {
return await withSshConnection(host, async (client) => {
let cpuPercent: number | null = null;
let cores: number | null = null;
let loadTriplet: [number, number, number] | null = null;
const cpu = await collectCpuMetrics(client);
const memory = await collectMemoryMetrics(client);
const disk = await collectDiskMetrics(client);
const network = await collectNetworkMetrics(client);
const uptime = await collectUptimeMetrics(client);
const processes = await collectProcessesMetrics(client);
const system = await collectSystemMetrics(client);
let login_stats = {
recentLogins: [],
failedLogins: [],
totalLogins: 0,
uniqueIPs: 0,
};
try {
const [stat1, loadAvgOut, coresOut] = await Promise.all([
execCommand(client, "cat /proc/stat"),
execCommand(client, "cat /proc/loadavg"),
execCommand(
client,
"nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo",
),
]);
await new Promise((r) => setTimeout(r, 500));
const stat2 = await execCommand(client, "cat /proc/stat");
const cpuLine1 = (
stat1.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const cpuLine2 = (
stat2.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const a = parseCpuLine(cpuLine1);
const b = parseCpuLine(cpuLine2);
if (a && b) {
const totalDiff = b.total - a.total;
const idleDiff = b.idle - a.idle;
const used = totalDiff - idleDiff;
if (totalDiff > 0)
cpuPercent = Math.max(0, Math.min(100, (used / totalDiff) * 100));
}
const laParts = loadAvgOut.stdout.trim().split(/\s+/);
if (laParts.length >= 3) {
loadTriplet = [
Number(laParts[0]),
Number(laParts[1]),
Number(laParts[2]),
].map((v) => (Number.isFinite(v) ? Number(v) : 0)) as [
number,
number,
number,
];
}
const coresNum = Number((coresOut.stdout || "").trim());
cores = Number.isFinite(coresNum) && coresNum > 0 ? coresNum : null;
login_stats = await collectLoginStats(client);
} catch (e) {
cpuPercent = null;
cores = null;
loadTriplet = null;
statsLogger.debug("Failed to collect login stats", {
operation: "login_stats_failed",
error: e instanceof Error ? e.message : String(e),
});
}
let memPercent: number | null = null;
let usedGiB: number | null = null;
let totalGiB: number | null = null;
try {
const memInfo = await execCommand(client, "cat /proc/meminfo");
const lines = memInfo.stdout.split("\n");
const getVal = (key: string) => {
const line = lines.find((l) => l.startsWith(key));
if (!line) return null;
const m = line.match(/\d+/);
return m ? Number(m[0]) : null;
};
const totalKb = getVal("MemTotal:");
const availKb = getVal("MemAvailable:");
if (totalKb && availKb && totalKb > 0) {
const usedKb = totalKb - availKb;
memPercent = Math.max(0, Math.min(100, (usedKb / totalKb) * 100));
usedGiB = kibToGiB(usedKb);
totalGiB = kibToGiB(totalKb);
}
} catch (e) {
memPercent = null;
usedGiB = null;
totalGiB = null;
}
let diskPercent: number | null = null;
let usedHuman: string | null = null;
let totalHuman: string | null = null;
let availableHuman: string | null = null;
try {
const [diskOutHuman, diskOutBytes] = await Promise.all([
execCommand(client, "df -h -P / | tail -n +2"),
execCommand(client, "df -B1 -P / | tail -n +2"),
]);
const humanLine =
diskOutHuman.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const bytesLine =
diskOutBytes.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const humanParts = humanLine.split(/\s+/);
const bytesParts = bytesLine.split(/\s+/);
if (humanParts.length >= 6 && bytesParts.length >= 6) {
totalHuman = humanParts[1] || null;
usedHuman = humanParts[2] || null;
availableHuman = humanParts[3] || null;
const totalBytes = Number(bytesParts[1]);
const usedBytes = Number(bytesParts[2]);
if (
Number.isFinite(totalBytes) &&
Number.isFinite(usedBytes) &&
totalBytes > 0
) {
diskPercent = Math.max(
0,
Math.min(100, (usedBytes / totalBytes) * 100),
);
}
}
} catch (e) {
diskPercent = null;
usedHuman = null;
totalHuman = null;
availableHuman = null;
}
const interfaces: Array<{
name: string;
ip: string;
state: string;
rxBytes: string | null;
txBytes: string | null;
}> = [];
try {
const ifconfigOut = await execCommand(
client,
"ip -o addr show | awk '{print $2,$4}' | grep -v '^lo'",
);
const netStatOut = await execCommand(
client,
"ip -o link show | awk '{gsub(/:/, \"\", $2); print $2,$9}'",
);
const addrs = ifconfigOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const states = netStatOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const ifMap = new Map<string, { ip: string; state: string }>();
for (const line of addrs) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const ip = parts[1].split("/")[0];
if (!ifMap.has(name)) ifMap.set(name, { ip, state: "UNKNOWN" });
}
}
for (const line of states) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const state = parts[1];
const existing = ifMap.get(name);
if (existing) {
existing.state = state;
}
}
}
for (const [name, data] of ifMap.entries()) {
interfaces.push({
name,
ip: data.ip,
state: data.state,
rxBytes: null,
txBytes: null,
});
}
} catch (e) {}
let uptimeSeconds: number | null = null;
let uptimeFormatted: string | null = null;
try {
const uptimeOut = await execCommand(client, "cat /proc/uptime");
const uptimeParts = uptimeOut.stdout.trim().split(/\s+/);
if (uptimeParts.length >= 1) {
uptimeSeconds = Number(uptimeParts[0]);
if (Number.isFinite(uptimeSeconds)) {
const days = Math.floor(uptimeSeconds / 86400);
const hours = Math.floor((uptimeSeconds % 86400) / 3600);
const minutes = Math.floor((uptimeSeconds % 3600) / 60);
uptimeFormatted = `${days}d ${hours}h ${minutes}m`;
}
}
} catch (e) {}
let totalProcesses: number | null = null;
let runningProcesses: number | null = null;
const topProcesses: Array<{
pid: string;
user: string;
cpu: string;
mem: string;
command: string;
}> = [];
try {
const psOut = await execCommand(
client,
"ps aux --sort=-%cpu | head -n 11",
);
const psLines = psOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
if (psLines.length > 1) {
for (let i = 1; i < Math.min(psLines.length, 11); i++) {
const parts = psLines[i].split(/\s+/);
if (parts.length >= 11) {
topProcesses.push({
pid: parts[1],
user: parts[0],
cpu: parts[2],
mem: parts[3],
command: parts.slice(10).join(" ").substring(0, 50),
});
}
}
}
const procCount = await execCommand(client, "ps aux | wc -l");
const runningCount = await execCommand(
client,
"ps aux | grep -c ' R '",
);
totalProcesses = Number(procCount.stdout.trim()) - 1;
runningProcesses = Number(runningCount.stdout.trim());
} catch (e) {}
let hostname: string | null = null;
let kernel: string | null = null;
let os: string | null = null;
try {
const hostnameOut = await execCommand(client, "hostname");
const kernelOut = await execCommand(client, "uname -r");
const osOut = await execCommand(
client,
"cat /etc/os-release | grep '^PRETTY_NAME=' | cut -d'\"' -f2",
);
hostname = hostnameOut.stdout.trim() || null;
kernel = kernelOut.stdout.trim() || null;
os = osOut.stdout.trim() || null;
} catch (e) {}
const result = {
cpu: { percent: toFixedNum(cpuPercent, 0), cores, load: loadTriplet },
memory: {
percent: toFixedNum(memPercent, 0),
usedGiB: usedGiB ? toFixedNum(usedGiB, 2) : null,
totalGiB: totalGiB ? toFixedNum(totalGiB, 2) : null,
},
disk: {
percent: toFixedNum(diskPercent, 0),
usedHuman,
totalHuman,
availableHuman,
},
network: {
interfaces,
},
uptime: {
seconds: uptimeSeconds,
formatted: uptimeFormatted,
},
processes: {
total: totalProcesses,
running: runningProcesses,
top: topProcesses,
},
system: {
hostname,
kernel,
os,
},
cpu,
memory,
disk,
network,
uptime,
processes,
system,
login_stats,
};
metricsCache.set(host.id, result);
@@ -1365,7 +1343,7 @@ function tcpPing(
settled = true;
try {
socket.destroy();
} catch (error) {}
resolve(result);
};
@@ -1438,6 +1416,67 @@ app.post("/refresh", async (req, res) => {
res.json({ message: "Polling refreshed" });
});
app.post("/host-updated", async (req, res) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.body;
if (!SimpleDBOps.isUserDataUnlocked(userId)) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
if (!hostId || typeof hostId !== "number") {
return res.status(400).json({ error: "Invalid hostId" });
}
try {
const host = await fetchHostById(hostId, userId);
if (host) {
await pollingManager.startPollingForHost(host);
res.json({ message: "Host polling started" });
} else {
res.status(404).json({ error: "Host not found" });
}
} catch (error) {
statsLogger.error("Failed to start polling for host", error, {
operation: "host_updated",
hostId,
userId,
});
res.status(500).json({ error: "Failed to start polling" });
}
});
app.post("/host-deleted", async (req, res) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.body;
if (!SimpleDBOps.isUserDataUnlocked(userId)) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
if (!hostId || typeof hostId !== "number") {
return res.status(400).json({ error: "Invalid hostId" });
}
try {
pollingManager.stopPollingForHost(hostId, true);
res.json({ message: "Host polling stopped" });
} catch (error) {
statsLogger.error("Failed to stop polling for host", error, {
operation: "host_deleted",
hostId,
userId,
});
res.status(500).json({ error: "Failed to stop polling" });
}
});
app.get("/metrics/:id", validateHostId, async (req, res) => {
const id = Number(req.params.id);
const userId = (req as AuthenticatedRequest).userId;

@@ -31,6 +31,7 @@ interface ConnectToHostData {
credentialId?: number;
userId?: string;
forceKeyboardInteractive?: boolean;
jumpHosts?: Array<{ hostId: number }>;
};
initialPath?: string;
executeCommand?: string;
@@ -57,6 +58,173 @@ const userCrypto = UserCrypto.getInstance();
const userConnections = new Map<string, Set<WebSocket>>();
async function resolveJumpHost(
hostId: number,
userId: string,
): Promise<any | null> {
try {
const hosts = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (hosts.length === 0) {
return null;
}
const host = hosts[0];
if (host.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, host.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
return {
...host,
password: credential.password,
key:
credential.private_key || credential.privateKey || credential.key,
keyPassword: credential.key_password || credential.keyPassword,
keyType: credential.key_type || credential.keyType,
authType: credential.auth_type || credential.authType,
};
}
}
return host;
} catch (error) {
sshLogger.error("Failed to resolve jump host", error, {
operation: "resolve_jump_host",
hostId,
userId,
});
return null;
}
}
async function createJumpHostChain(
jumpHosts: Array<{ hostId: number }>,
userId: string,
): Promise<Client | null> {
if (!jumpHosts || jumpHosts.length === 0) {
return null;
}
let currentClient: Client | null = null;
const clients: Client[] = [];
try {
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostConfig = await resolveJumpHost(jumpHosts[i].hostId, userId);
if (!jumpHostConfig) {
sshLogger.error(`Jump host ${i + 1} not found`, undefined, {
operation: "jump_host_chain",
hostId: jumpHosts[i].hostId,
});
clients.forEach((c) => c.end());
return null;
}
const jumpClient = new Client();
clients.push(jumpClient);
const connected = await new Promise<boolean>((resolve) => {
const timeout = setTimeout(() => {
resolve(false);
}, 30000);
jumpClient.on("ready", () => {
clearTimeout(timeout);
resolve(true);
});
jumpClient.on("error", (err) => {
clearTimeout(timeout);
sshLogger.error(`Jump host ${i + 1} connection failed`, err, {
operation: "jump_host_connect",
hostId: jumpHostConfig.id,
ip: jumpHostConfig.ip,
});
resolve(false);
});
const connectConfig: any = {
host: jumpHostConfig.ip,
port: jumpHostConfig.port || 22,
username: jumpHostConfig.username,
tryKeyboard: true,
readyTimeout: 30000,
};
if (jumpHostConfig.authType === "password" && jumpHostConfig.password) {
connectConfig.password = jumpHostConfig.password;
} else if (jumpHostConfig.authType === "key" && jumpHostConfig.key) {
const cleanKey = jumpHostConfig.key
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
connectConfig.privateKey = Buffer.from(cleanKey, "utf8");
if (jumpHostConfig.keyPassword) {
connectConfig.passphrase = jumpHostConfig.keyPassword;
}
}
if (currentClient) {
currentClient.forwardOut(
"127.0.0.1",
0,
jumpHostConfig.ip,
jumpHostConfig.port || 22,
(err, stream) => {
if (err) {
clearTimeout(timeout);
resolve(false);
return;
}
connectConfig.sock = stream;
jumpClient.connect(connectConfig);
},
);
} else {
jumpClient.connect(connectConfig);
}
});
if (!connected) {
clients.forEach((c) => c.end());
return null;
}
currentClient = jumpClient;
}
return currentClient;
} catch (error) {
sshLogger.error("Failed to create jump host chain", error, {
operation: "jump_host_chain",
});
clients.forEach((c) => c.end());
return null;
}
}
const wss = new WebSocketServer({
port: 30002,
verifyClient: async (info) => {
@@ -79,6 +247,7 @@ const wss = new WebSocketServer({
}
const existingConnections = userConnections.get(payload.userId);
if (existingConnections && existingConnections.size >= 3) {
return false;
}
@@ -152,6 +321,10 @@ wss.on("connection", async (ws: WebSocket, req) => {
let totpPromptSent = false;
let isKeyboardInteractive = false;
let keyboardInteractiveResponded = false;
let isConnecting = false;
let isConnected = false;
let isCleaningUp = false;
let isShellInitializing = false;
ws.on("close", () => {
const userWs = userConnections.get(userId);
@@ -417,10 +590,21 @@ wss.on("connection", async (ws: WebSocket, req) => {
return;
}
if (isConnecting || isConnected) {
sshLogger.warn("Connection already in progress or established", {
operation: "ssh_connect",
hostId: id,
isConnecting,
isConnected,
});
return;
}
isConnecting = true;
sshConn = new Client();
const connectionTimeout = setTimeout(() => {
if (sshConn && isConnecting && !isConnected) {
sshLogger.error("SSH connection timeout", undefined, {
operation: "ssh_connect",
hostId: id,
@@ -433,7 +617,7 @@ wss.on("connection", async (ws: WebSocket, req) => {
);
cleanupSSH(connectionTimeout);
}
}, 120000);
let resolvedCredentials = { password, key, keyPassword, keyType, authType };
let authMethodNotAvailable = false;
@@ -498,7 +682,9 @@ wss.on("connection", async (ws: WebSocket, req) => {
sshConn.on("ready", () => {
clearTimeout(connectionTimeout);
const conn = sshConn;
if (!conn || isCleaningUp || !sshConn) {
sshLogger.warn(
"SSH connection was cleaned up before shell could be created",
{
@@ -507,6 +693,9 @@ wss.on("connection", async (ws: WebSocket, req) => {
ip,
port,
username,
isCleaningUp,
connNull: !conn,
sshConnNull: !sshConn,
},
);
ws.send(
@@ -519,13 +708,37 @@ wss.on("connection", async (ws: WebSocket, req) => {
return;
}
isShellInitializing = true;
isConnecting = false;
isConnected = true;
if (!sshConn) {
sshLogger.error(
"SSH connection became null right before shell creation",
{
operation: "ssh_shell",
hostId: id,
},
);
ws.send(
JSON.stringify({
type: "error",
message: "SSH connection lost during setup",
}),
);
isShellInitializing = false;
return;
}
conn.shell(
{
rows: data.rows,
cols: data.cols,
term: "xterm-256color",
} as PseudoTtyOptions,
(err, stream) => {
isShellInitializing = false;
if (err) {
sshLogger.error("Shell error", err, {
operation: "ssh_shell",
@@ -836,9 +1049,10 @@ wss.on("connection", async (ws: WebSocket, req) => {
tryKeyboard: true,
keepaliveInterval: 30000,
keepaliveCountMax: 3,
readyTimeout: 120000,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
timeout: 120000,
env: {
TERM: "xterm-256color",
LANG: "en_US.UTF-8",
@@ -969,7 +1183,68 @@ wss.on("connection", async (ws: WebSocket, req) => {
return;
}
if (
hostConfig.jumpHosts &&
hostConfig.jumpHosts.length > 0 &&
hostConfig.userId
) {
try {
const jumpClient = await createJumpHostChain(
hostConfig.jumpHosts,
hostConfig.userId,
);
if (!jumpClient) {
sshLogger.error("Failed to establish jump host chain");
ws.send(
JSON.stringify({
type: "error",
message: "Failed to connect through jump hosts",
}),
);
cleanupSSH(connectionTimeout);
return;
}
jumpClient.forwardOut("127.0.0.1", 0, ip, port, (err, stream) => {
if (err) {
sshLogger.error("Failed to forward through jump host", err, {
operation: "ssh_jump_forward",
hostId: id,
ip,
port,
});
ws.send(
JSON.stringify({
type: "error",
message: "Failed to forward through jump host: " + err.message,
}),
);
jumpClient.end();
cleanupSSH(connectionTimeout);
return;
}
connectConfig.sock = stream;
sshConn.connect(connectConfig);
});
} catch (error) {
sshLogger.error("Jump host error", error, {
operation: "ssh_jump_host",
hostId: id,
});
ws.send(
JSON.stringify({
type: "error",
message: "Failed to connect through jump hosts",
}),
);
cleanupSSH(connectionTimeout);
return;
}
} else {
sshConn.connect(connectConfig);
}
}
function handleResize(data: ResizeData) {
@@ -982,6 +1257,24 @@ wss.on("connection", async (ws: WebSocket, req) => {
}
function cleanupSSH(timeoutId?: NodeJS.Timeout) {
if (isCleaningUp) {
return;
}
if (isShellInitializing) {
sshLogger.warn(
"Cleanup attempted during shell initialization, deferring",
{
operation: "cleanup_deferred",
userId,
},
);
setTimeout(() => cleanupSSH(timeoutId), 100);
return;
}
isCleaningUp = true;
if (timeoutId) {
clearTimeout(timeoutId);
}
@@ -1019,6 +1312,12 @@ wss.on("connection", async (ws: WebSocket, req) => {
isKeyboardInteractive = false;
keyboardInteractiveResponded = false;
keyboardInteractiveFinish = null;
isConnecting = false;
isConnected = false;
setTimeout(() => {
isCleaningUp = false;
}, 100);
}
function setupPingInterval() {
@@ -1033,7 +1332,12 @@ wss.on("connection", async (ws: WebSocket, req) => {
);
cleanupSSH();
}
} else if (!sshConn || !sshStream) {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
}
}, 30000);
}
});

@@ -15,7 +15,7 @@ import type {
ErrorType,
} from "../../types/index.js";
import { CONNECTION_STATES } from "../../types/index.js";
import { tunnelLogger, sshLogger } from "../utils/logger.js";
import { SystemCrypto } from "../utils/system-crypto.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { DataCrypto } from "../utils/data-crypto.js";
@@ -217,7 +217,7 @@ function cleanupTunnelResources(
if (verification?.timeout) clearTimeout(verification.timeout);
try {
verification?.conn.end();
} catch (error) {}
tunnelVerifications.delete(tunnelName);
}
@@ -282,7 +282,7 @@ function handleDisconnect(
const verification = tunnelVerifications.get(tunnelName);
if (verification?.timeout) clearTimeout(verification.timeout);
verification?.conn.end();
} catch (error) {}
tunnelVerifications.delete(tunnelName);
}
@@ -638,7 +638,7 @@ async function connectSSHTunnel(
try {
conn.end();
} catch (error) {}
activeTunnels.delete(tunnelName);
@@ -778,7 +778,7 @@ async function connectSSHTunnel(
const verification = tunnelVerifications.get(tunnelName);
if (verification?.timeout) clearTimeout(verification.timeout);
verification?.conn.end();
} catch (error) {}
tunnelVerifications.delete(tunnelName);
}

@@ -0,0 +1,42 @@
import type { Client } from "ssh2";
export function execCommand(
client: Client,
command: string,
): Promise<{
stdout: string;
stderr: string;
code: number | null;
}> {
return new Promise((resolve, reject) => {
client.exec(command, { pty: false }, (err, stream) => {
if (err) return reject(err);
let stdout = "";
let stderr = "";
let exitCode: number | null = null;
stream
.on("close", (code: number | undefined) => {
exitCode = typeof code === "number" ? code : null;
resolve({ stdout, stderr, code: exitCode });
})
.on("data", (data: Buffer) => {
stdout += data.toString("utf8");
})
.stderr.on("data", (data: Buffer) => {
stderr += data.toString("utf8");
});
});
});
}
export function toFixedNum(
n: number | null | undefined,
digits = 2,
): number | null {
if (typeof n !== "number" || !Number.isFinite(n)) return null;
return Number(n.toFixed(digits));
}
export function kibToGiB(kib: number): number {
return kib / (1024 * 1024);
}
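A small usage sketch for these helpers, assuming an already-connected ssh2 `Client` (the command shown is arbitrary):

```typescript
import type { Client } from "ssh2";
import { execCommand, toFixedNum, kibToGiB } from "./common-utils.js";

async function printKernel(client: Client): Promise<void> {
  const { stdout, stderr, code } = await execCommand(client, "uname -r");
  if (code === 0) {
    console.log(stdout.trim());
  } else {
    console.error(stderr.trim());
  }
  console.log(toFixedNum(3.14159)); // 3.14 (default 2 digits)
  console.log(kibToGiB(2 * 1024 * 1024)); // 2 (KiB → GiB)
}
```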

@@ -0,0 +1,83 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum } from "./common-utils.js";
function parseCpuLine(
cpuLine: string,
): { total: number; idle: number } | undefined {
const parts = cpuLine.trim().split(/\s+/);
if (parts[0] !== "cpu") return undefined;
const nums = parts
.slice(1)
.map((n) => Number(n))
.filter((n) => Number.isFinite(n));
if (nums.length < 4) return undefined;
const idle = (nums[3] ?? 0) + (nums[4] ?? 0);
const total = nums.reduce((a, b) => a + b, 0);
return { total, idle };
}
export async function collectCpuMetrics(client: Client): Promise<{
percent: number | null;
cores: number | null;
load: [number, number, number] | null;
}> {
let cpuPercent: number | null = null;
let cores: number | null = null;
let loadTriplet: [number, number, number] | null = null;
try {
const [stat1, loadAvgOut, coresOut] = await Promise.all([
execCommand(client, "cat /proc/stat"),
execCommand(client, "cat /proc/loadavg"),
execCommand(
client,
"nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo",
),
]);
await new Promise((r) => setTimeout(r, 500));
const stat2 = await execCommand(client, "cat /proc/stat");
const cpuLine1 = (
stat1.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const cpuLine2 = (
stat2.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const a = parseCpuLine(cpuLine1);
const b = parseCpuLine(cpuLine2);
if (a && b) {
const totalDiff = b.total - a.total;
const idleDiff = b.idle - a.idle;
const used = totalDiff - idleDiff;
if (totalDiff > 0)
cpuPercent = Math.max(0, Math.min(100, (used / totalDiff) * 100));
}
const laParts = loadAvgOut.stdout.trim().split(/\s+/);
if (laParts.length >= 3) {
loadTriplet = [
Number(laParts[0]),
Number(laParts[1]),
Number(laParts[2]),
].map((v) => (Number.isFinite(v) ? Number(v) : 0)) as [
number,
number,
number,
];
}
const coresNum = Number((coresOut.stdout || "").trim());
cores = Number.isFinite(coresNum) && coresNum > 0 ? coresNum : null;
} catch (e) {
cpuPercent = null;
cores = null;
loadTriplet = null;
}
return {
percent: toFixedNum(cpuPercent, 0),
cores,
load: loadTriplet,
};
}
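The percent value comes from the classic two-sample /proc/stat delta; a worked example with made-up jiffy counts:

```typescript
// Sample 1: "cpu  100 0 50 800 50"  → total = 1000, idle = 800 + 50 = 850
// Sample 2: "cpu  160 0 80 1090 70" → total = 1400, idle = 1090 + 70 = 1160
const totalDiff = 1400 - 1000; // 400
const idleDiff = 1160 - 850; // 310
const used = totalDiff - idleDiff; // 90
const percent = (used / totalDiff) * 100; // 22.5% over the 500 ms window
```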

@@ -0,0 +1,67 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum } from "./common-utils.js";
export async function collectDiskMetrics(client: Client): Promise<{
percent: number | null;
usedHuman: string | null;
totalHuman: string | null;
availableHuman: string | null;
}> {
let diskPercent: number | null = null;
let usedHuman: string | null = null;
let totalHuman: string | null = null;
let availableHuman: string | null = null;
try {
const [diskOutHuman, diskOutBytes] = await Promise.all([
execCommand(client, "df -h -P / | tail -n +2"),
execCommand(client, "df -B1 -P / | tail -n +2"),
]);
const humanLine =
diskOutHuman.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const bytesLine =
diskOutBytes.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const humanParts = humanLine.split(/\s+/);
const bytesParts = bytesLine.split(/\s+/);
if (humanParts.length >= 6 && bytesParts.length >= 6) {
totalHuman = humanParts[1] || null;
usedHuman = humanParts[2] || null;
availableHuman = humanParts[3] || null;
const totalBytes = Number(bytesParts[1]);
const usedBytes = Number(bytesParts[2]);
if (
Number.isFinite(totalBytes) &&
Number.isFinite(usedBytes) &&
totalBytes > 0
) {
diskPercent = Math.max(
0,
Math.min(100, (usedBytes / totalBytes) * 100),
);
}
}
} catch (e) {
diskPercent = null;
usedHuman = null;
totalHuman = null;
availableHuman = null;
}
return {
percent: toFixedNum(diskPercent, 0),
usedHuman,
totalHuman,
availableHuman,
};
}
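The parsing relies on POSIX (`-P`) single-line output per filesystem; an illustrative byte-mode line (numbers invented) and the field mapping:

```typescript
// df -B1 -P /  →  "Filesystem 1-blocks Used Available Capacity Mounted on"
const bytesLine = "/dev/sda1 105226698752 45097156608 54760833024 46% /";
const parts = bytesLine.split(/\s+/);
const totalBytes = Number(parts[1]);
const usedBytes = Number(parts[2]);
const percent = (usedBytes / totalBytes) * 100; // ≈ 42.9, clamped to [0, 100]
```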

@@ -0,0 +1,122 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
export interface LoginRecord {
user: string;
ip: string;
time: string;
status: "success" | "failed";
}
export interface LoginStats {
recentLogins: LoginRecord[];
failedLogins: LoginRecord[];
totalLogins: number;
uniqueIPs: number;
}
export async function collectLoginStats(client: Client): Promise<LoginStats> {
const recentLogins: LoginRecord[] = [];
const failedLogins: LoginRecord[] = [];
const ipSet = new Set<string>();
try {
const lastOut = await execCommand(
client,
"last -n 20 -F -w | grep -v 'reboot' | grep -v 'wtmp' | head -20",
);
const lastLines = lastOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
for (const line of lastLines) {
const parts = line.split(/\s+/);
if (parts.length >= 10) {
const user = parts[0];
const tty = parts[1];
const ip =
parts[2] === ":" || parts[2].startsWith(":") ? "local" : parts[2];
const timeStart = parts.indexOf(
parts.find((p) => /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun)/.test(p)) || "",
);
if (timeStart > 0 && parts.length > timeStart + 4) {
const timeStr = parts.slice(timeStart, timeStart + 5).join(" ");
if (user && user !== "wtmp" && tty !== "system") {
recentLogins.push({
user,
ip,
time: new Date(timeStr).toISOString(),
status: "success",
});
if (ip !== "local") {
ipSet.add(ip);
}
}
}
}
}
} catch (e) {
// Ignore errors
}
try {
const failedOut = await execCommand(
client,
"grep 'Failed password' /var/log/auth.log 2>/dev/null | tail -10 || grep 'authentication failure' /var/log/secure 2>/dev/null | tail -10 || echo ''",
);
const failedLines = failedOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
for (const line of failedLines) {
let user = "unknown";
let ip = "unknown";
let timeStr = "";
const userMatch = line.match(/for (?:invalid user )?(\S+)/);
if (userMatch) {
user = userMatch[1];
}
const ipMatch = line.match(/from (\d+\.\d+\.\d+\.\d+)/);
if (ipMatch) {
ip = ipMatch[1];
}
const dateMatch = line.match(/^(\w+\s+\d+\s+\d+:\d+:\d+)/);
if (dateMatch) {
const currentYear = new Date().getFullYear();
timeStr = `${currentYear} ${dateMatch[1]}`;
}
if (user && ip) {
failedLogins.push({
user,
ip,
time: timeStr
? new Date(timeStr).toISOString()
: new Date().toISOString(),
status: "failed",
});
if (ip !== "unknown") {
ipSet.add(ip);
}
}
}
} catch (e) {
// Ignore errors
}
return {
recentLogins: recentLogins.slice(0, 10),
failedLogins: failedLogins.slice(0, 10),
totalLogins: recentLogins.length,
uniqueIPs: ipSet.size,
};
}
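The `last -F` parser keys off the first weekday token; an illustrative line (user, IP, and timestamp are invented):

```typescript
const line =
  "alice pts/0 203.0.113.7 Mon Sep  1 10:15:00 2025 - still logged in";
const parts = line.split(/\s+/);
const timeStart = parts.findIndex((p) =>
  /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun)/.test(p),
);
const timeStr = parts.slice(timeStart, timeStart + 5).join(" ");
// "Mon Sep 1 10:15:00 2025" — Date parses this on V8/Node, though the
// format is engine-dependent rather than guaranteed by the ECMAScript spec.
const iso = new Date(timeStr).toISOString();
```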

@@ -0,0 +1,41 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum, kibToGiB } from "./common-utils.js";
export async function collectMemoryMetrics(client: Client): Promise<{
percent: number | null;
usedGiB: number | null;
totalGiB: number | null;
}> {
let memPercent: number | null = null;
let usedGiB: number | null = null;
let totalGiB: number | null = null;
try {
const memInfo = await execCommand(client, "cat /proc/meminfo");
const lines = memInfo.stdout.split("\n");
const getVal = (key: string) => {
const line = lines.find((l) => l.startsWith(key));
if (!line) return null;
const m = line.match(/\d+/);
return m ? Number(m[0]) : null;
};
const totalKb = getVal("MemTotal:");
const availKb = getVal("MemAvailable:");
if (totalKb && availKb && totalKb > 0) {
const usedKb = totalKb - availKb;
memPercent = Math.max(0, Math.min(100, (usedKb / totalKb) * 100));
usedGiB = kibToGiB(usedKb);
totalGiB = kibToGiB(totalKb);
}
} catch (e) {
memPercent = null;
usedGiB = null;
totalGiB = null;
}
return {
percent: toFixedNum(memPercent, 0),
usedGiB: usedGiB ? toFixedNum(usedGiB, 2) : null,
totalGiB: totalGiB ? toFixedNum(totalGiB, 2) : null,
};
}
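The math uses MemAvailable rather than MemFree, so reclaimable caches count as available; with invented values:

```typescript
const totalKb = 16_384_000; // from "MemTotal:"
const availKb = 8_192_000; // from "MemAvailable:"
const usedKb = totalKb - availKb; // 8_192_000
const percent = (usedKb / totalKb) * 100; // 50
const usedGiB = usedKb / (1024 * 1024); // 7.8125 → reported as 7.81
```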

@@ -0,0 +1,79 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectNetworkMetrics(client: Client): Promise<{
interfaces: Array<{
name: string;
ip: string;
state: string;
rxBytes: string | null;
txBytes: string | null;
}>;
}> {
const interfaces: Array<{
name: string;
ip: string;
state: string;
rxBytes: string | null;
txBytes: string | null;
}> = [];
try {
const ifconfigOut = await execCommand(
client,
"ip -o addr show | awk '{print $2,$4}' | grep -v '^lo'",
);
const netStatOut = await execCommand(
client,
"ip -o link show | awk '{gsub(/:/, \"\", $2); print $2,$9}'",
);
const addrs = ifconfigOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const states = netStatOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const ifMap = new Map<string, { ip: string; state: string }>();
for (const line of addrs) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const ip = parts[1].split("/")[0];
if (!ifMap.has(name)) ifMap.set(name, { ip, state: "UNKNOWN" });
}
}
for (const line of states) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const state = parts[1];
const existing = ifMap.get(name);
if (existing) {
existing.state = state;
}
}
}
for (const [name, data] of ifMap.entries()) {
interfaces.push({
name,
ip: data.ip,
state: data.state,
rxBytes: null,
txBytes: null,
});
}
} catch (e) {
statsLogger.debug("Failed to collect network interface stats", {
operation: "network_stats_failed",
error: e instanceof Error ? e.message : String(e),
});
}
return { interfaces };
}

@@ -0,0 +1,63 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectProcessesMetrics(client: Client): Promise<{
total: number | null;
running: number | null;
top: Array<{
pid: string;
user: string;
cpu: string;
mem: string;
command: string;
}>;
}> {
let totalProcesses: number | null = null;
let runningProcesses: number | null = null;
const topProcesses: Array<{
pid: string;
user: string;
cpu: string;
mem: string;
command: string;
}> = [];
try {
const psOut = await execCommand(client, "ps aux --sort=-%cpu | head -n 11");
const psLines = psOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
if (psLines.length > 1) {
for (let i = 1; i < Math.min(psLines.length, 11); i++) {
const parts = psLines[i].split(/\s+/);
if (parts.length >= 11) {
topProcesses.push({
pid: parts[1],
user: parts[0],
cpu: parts[2],
mem: parts[3],
command: parts.slice(10).join(" ").substring(0, 50),
});
}
}
}
const procCount = await execCommand(client, "ps aux | wc -l");
const runningCount = await execCommand(client, "ps aux | grep -c ' R '");
totalProcesses = Number(procCount.stdout.trim()) - 1;
runningProcesses = Number(runningCount.stdout.trim());
} catch (e) {
statsLogger.debug("Failed to collect process stats", {
operation: "process_stats_failed",
error: e instanceof Error ? e.message : String(e),
});
}
return {
total: totalProcesses,
running: runningProcesses,
top: topProcesses,
};
}

@@ -0,0 +1,37 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectSystemMetrics(client: Client): Promise<{
hostname: string | null;
kernel: string | null;
os: string | null;
}> {
let hostname: string | null = null;
let kernel: string | null = null;
let os: string | null = null;
try {
const hostnameOut = await execCommand(client, "hostname");
const kernelOut = await execCommand(client, "uname -r");
const osOut = await execCommand(
client,
"cat /etc/os-release | grep '^PRETTY_NAME=' | cut -d'\"' -f2",
);
hostname = hostnameOut.stdout.trim() || null;
kernel = kernelOut.stdout.trim() || null;
os = osOut.stdout.trim() || null;
} catch (e) {
statsLogger.debug("Failed to collect system info", {
operation: "system_info_failed",
error: e instanceof Error ? e.message : String(e),
});
}
return {
hostname,
kernel,
os,
};
}

@@ -0,0 +1,35 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectUptimeMetrics(client: Client): Promise<{
seconds: number | null;
formatted: string | null;
}> {
let uptimeSeconds: number | null = null;
let uptimeFormatted: string | null = null;
try {
const uptimeOut = await execCommand(client, "cat /proc/uptime");
const uptimeParts = uptimeOut.stdout.trim().split(/\s+/);
if (uptimeParts.length >= 1) {
uptimeSeconds = Number(uptimeParts[0]);
if (Number.isFinite(uptimeSeconds)) {
const days = Math.floor(uptimeSeconds / 86400);
const hours = Math.floor((uptimeSeconds % 86400) / 3600);
const minutes = Math.floor((uptimeSeconds % 3600) / 60);
uptimeFormatted = `${days}d ${hours}h ${minutes}m`;
}
}
} catch (e) {
statsLogger.debug("Failed to collect uptime", {
operation: "uptime_failed",
error: e instanceof Error ? e.message : String(e),
});
}
return {
seconds: uptimeSeconds,
formatted: uptimeFormatted,
};
}
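A quick check of the formatting arithmetic with an invented uptime:

```typescript
const uptimeSeconds = 93784; // first field of /proc/uptime
const days = Math.floor(uptimeSeconds / 86400); // 1
const hours = Math.floor((uptimeSeconds % 86400) / 3600); // 2
const minutes = Math.floor((uptimeSeconds % 3600) / 60); // 3
const formatted = `${days}d ${hours}h ${minutes}m`; // "1d 2h 3m"
```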

@@ -21,7 +21,7 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
if (persistentConfig.parsed) {
Object.assign(process.env, persistentConfig.parsed);
}
} catch (error) {}
let version = "unknown";

@@ -85,24 +85,25 @@ class AuthManager {
await this.userCrypto.setupUserEncryption(userId, password);
}
async registerOIDCUser(
userId: string,
sessionDurationMs: number,
): Promise<void> {
await this.userCrypto.setupOIDCUserEncryption(userId, sessionDurationMs);
}
async authenticateOIDCUser(
userId: string,
deviceType?: DeviceType,
): Promise<boolean> {
const sessionDurationMs =
deviceType === "desktop" || deviceType === "mobile"
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
const authenticated = await this.userCrypto.authenticateOIDCUser(
userId,
sessionDurationMs,
);
if (authenticated) {
@@ -112,6 +113,33 @@ class AuthManager {
return authenticated;
}
async authenticateUser(
userId: string,
password: string,
deviceType?: DeviceType,
): Promise<boolean> {
const sessionDurationMs =
deviceType === "desktop" || deviceType === "mobile"
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
const authenticated = await this.userCrypto.authenticateUser(
userId,
password,
sessionDurationMs,
);
if (authenticated) {
await this.performLazyEncryptionMigration(userId);
}
return authenticated;
}
async convertToOIDCEncryption(userId: string): Promise<void> {
await this.userCrypto.convertToOIDCEncryption(userId);
}
private async performLazyEncryptionMigration(userId: string): Promise<void> {
try {
const userDataKey = this.getUserDataKey(userId);

@@ -233,7 +233,7 @@ IP.3 = 0.0.0.0
let envContent = "";
try {
envContent = await fs.readFile(this.ENV_FILE, "utf8");
} catch (error) {}
let updatedContent = envContent;
let hasChanges = false;

@@ -12,6 +12,7 @@ interface EncryptedFileMetadata {
algorithm: string;
keySource?: string;
salt?: string;
dataSize?: number;
}
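A hypothetical v2 metadata record for orientation (all values invented; the `iv`, `tag`, `version`, and `fingerprint` fields come from the surrounding code, and the AES-256-GCM algorithm string is assumed from the auth-tag usage):

```typescript
const example: EncryptedFileMetadata = {
  iv: "0123456789abcdef0123456789abcdef", // hex of 16 random bytes
  tag: "fedcba9876543210fedcba9876543210", // hex GCM auth tag
  version: "v2",
  fingerprint: "termix-v2-systemcrypto",
  algorithm: "aes-256-gcm", // assumed value of this.ALGORITHM
  keySource: "SystemCrypto",
  dataSize: 1048576, // ciphertext length, checked before decryption
};
```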
class DatabaseFileEncryption {
@@ -25,11 +26,12 @@ class DatabaseFileEncryption {
buffer: Buffer,
targetPath: string,
): Promise<string> {
const tmpPath = `${targetPath}.tmp-${Date.now()}-${process.pid}`;
const metadataPath = `${targetPath}${this.METADATA_FILE_SUFFIX}`;
try {
const key = await this.systemCrypto.getDatabaseKey();
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv(
this.ALGORITHM,
key,
@@ -45,14 +47,55 @@ class DatabaseFileEncryption {
fingerprint: "termix-v2-systemcrypto",
algorithm: this.ALGORITHM,
keySource: "SystemCrypto",
dataSize: encrypted.length,
};
const metadataJson = JSON.stringify(metadata, null, 2);
const metadataBuffer = Buffer.from(metadataJson, "utf8");
const metadataLengthBuffer = Buffer.alloc(4);
metadataLengthBuffer.writeUInt32BE(metadataBuffer.length, 0);
const finalBuffer = Buffer.concat([
metadataLengthBuffer,
metadataBuffer,
encrypted,
]);
fs.writeFileSync(tmpPath, finalBuffer);
fs.renameSync(tmpPath, targetPath);
try {
if (fs.existsSync(metadataPath)) {
fs.unlinkSync(metadataPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup old metadata file", {
operation: "old_meta_cleanup_failed",
path: metadataPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
return targetPath;
} catch (error) {
try {
if (fs.existsSync(tmpPath)) {
fs.unlinkSync(tmpPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup temporary files", {
operation: "temp_file_cleanup_failed",
tmpPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
databaseLogger.error("Failed to encrypt database buffer", error, {
operation: "database_buffer_encryption_failed",
targetPath,
@@ -74,6 +117,8 @@ class DatabaseFileEncryption {
const encryptedPath =
targetPath || `${sourcePath}${this.ENCRYPTED_FILE_SUFFIX}`;
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
const tmpPath = `${encryptedPath}.tmp-${Date.now()}-${process.pid}`;
const tmpMetadataPath = `${tmpPath}${this.METADATA_FILE_SUFFIX}`;
try {
const sourceData = fs.readFileSync(sourcePath);
@@ -93,6 +138,12 @@ class DatabaseFileEncryption {
]);
const tag = cipher.getAuthTag();
const keyFingerprint = crypto
.createHash("sha256")
.update(key)
.digest("hex")
.substring(0, 16);
const metadata: EncryptedFileMetadata = {
iv: iv.toString("hex"),
tag: tag.toString("hex"),
@@ -100,10 +151,14 @@ class DatabaseFileEncryption {
fingerprint: "termix-v2-systemcrypto",
algorithm: this.ALGORITHM,
keySource: "SystemCrypto",
dataSize: encrypted.length,
};
fs.writeFileSync(tmpPath, encrypted);
fs.writeFileSync(tmpMetadataPath, JSON.stringify(metadata, null, 2));
fs.renameSync(tmpPath, encryptedPath);
fs.renameSync(tmpMetadataPath, metadataPath);
databaseLogger.info("Database file encrypted successfully", {
operation: "database_file_encryption",
@@ -111,11 +166,30 @@ class DatabaseFileEncryption {
encryptedPath,
fileSize: sourceData.length,
encryptedSize: encrypted.length,
keyFingerprint,
fingerprintPrefix: metadata.fingerprint,
});
return encryptedPath;
} catch (error) {
try {
if (fs.existsSync(tmpPath)) {
fs.unlinkSync(tmpPath);
}
if (fs.existsSync(tmpMetadataPath)) {
fs.unlinkSync(tmpMetadataPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup temporary files", {
operation: "temp_file_cleanup_failed",
tmpPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
databaseLogger.error("Failed to encrypt database file", error, {
operation: "database_file_encryption_failed",
sourcePath,
@@ -134,16 +208,69 @@ class DatabaseFileEncryption {
);
}
let metadata: EncryptedFileMetadata;
let encryptedData: Buffer;
const fileBuffer = fs.readFileSync(encryptedPath);
try {
const metadataLength = fileBuffer.readUInt32BE(0);
const metadataEnd = 4 + metadataLength;
if (
metadataLength <= 0 ||
metadataEnd > fileBuffer.length ||
metadataEnd <= 4
) {
throw new Error("Invalid metadata length in single-file format");
}
const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
metadata = JSON.parse(metadataJson);
encryptedData = fileBuffer.slice(metadataEnd);
if (!metadata.iv || !metadata.tag || !metadata.version) {
throw new Error("Invalid metadata structure in single-file format");
}
} catch (singleFileError) {
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
if (!fs.existsSync(metadataPath)) {
throw new Error(
`Could not read database: Not a valid single-file format and metadata file is missing: ${metadataPath}. Error: ${singleFileError.message}`,
);
}
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
metadata = JSON.parse(metadataContent);
encryptedData = fileBuffer;
} catch (twoFileError) {
throw new Error(
`Failed to read database using both single-file and two-file formats. Error: ${twoFileError.message}`,
);
}
}
try {
if (
metadata.dataSize !== undefined &&
encryptedData.length !== metadata.dataSize
) {
databaseLogger.error(
"Encrypted file size mismatch - possible corrupted write or mismatched metadata",
null,
{
operation: "database_file_size_mismatch",
encryptedPath,
actualSize: encryptedData.length,
expectedSize: metadata.dataSize,
},
);
throw new Error(
`Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
`This indicates corrupted files or an interrupted write operation.`,
);
}
let key: Buffer;
if (metadata.version === "v2") {
@@ -181,13 +308,67 @@ class DatabaseFileEncryption {
return decryptedBuffer;
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const isAuthError =
errorMessage.includes("Unsupported state") ||
errorMessage.includes("authenticate data") ||
errorMessage.includes("auth");
if (isAuthError) {
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
let envFileExists = false;
let envFileReadable = false;
try {
envFileExists = fs.existsSync(envPath);
if (envFileExists) {
fs.accessSync(envPath, fs.constants.R_OK);
envFileReadable = true;
}
} catch (error) {
databaseLogger.debug("Operation failed, continuing", {
error: error instanceof Error ? error.message : String(error),
});
}
databaseLogger.error(
"Database decryption authentication failed - possible causes: wrong DATABASE_KEY, corrupted files, or interrupted write",
error,
{
operation: "database_buffer_decryption_auth_failed",
encryptedPath,
dataDir,
envPath,
envFileExists,
envFileReadable,
hasEnvKey: !!process.env.DATABASE_KEY,
envKeyLength: process.env.DATABASE_KEY?.length || 0,
suggestion:
"Check if DATABASE_KEY in .env matches the key used for encryption",
},
);
throw new Error(
`Database decryption authentication failed. This usually means:\n` +
`1. DATABASE_KEY has changed or is missing from ${dataDir}/.env\n` +
`2. Encrypted file was corrupted during write (system crash/restart)\n` +
`3. Metadata file does not match encrypted data\n` +
`\nDebug info:\n` +
`- DATA_DIR: ${dataDir}\n` +
`- .env file exists: ${envFileExists}\n` +
`- .env file readable: ${envFileReadable}\n` +
`- DATABASE_KEY in environment: ${!!process.env.DATABASE_KEY}\n` +
`Original error: ${errorMessage}`,
);
}
databaseLogger.error("Failed to decrypt database to buffer", error, {
operation: "database_buffer_decryption_failed",
encryptedPath,
errorMessage,
});
throw new Error(`Database buffer decryption failed: ${errorMessage}`);
}
}
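Putting the single-file layout together — a minimal standalone reader sketch for the v2 container (path handling and validation are simplified relative to the code above):

```typescript
import * as fs from "fs";

// Layout: [4-byte BE metadata length][metadata JSON][AES-GCM ciphertext]
function readEncryptedContainer(filePath: string): {
  metadata: EncryptedFileMetadata;
  ciphertext: Buffer;
} {
  const buf = fs.readFileSync(filePath);
  const metadataLength = buf.readUInt32BE(0);
  const metadataEnd = 4 + metadataLength;
  if (metadataLength <= 0 || metadataEnd > buf.length) {
    throw new Error("Invalid metadata length prefix");
  }
  const metadata = JSON.parse(buf.slice(4, metadataEnd).toString("utf8"));
  return { metadata, ciphertext: buf.slice(metadataEnd) };
}
```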
@@ -215,6 +396,26 @@ class DatabaseFileEncryption {
const encryptedData = fs.readFileSync(encryptedPath);
if (
metadata.dataSize !== undefined &&
encryptedData.length !== metadata.dataSize
) {
databaseLogger.error(
"Encrypted file size mismatch - possible corrupted write or mismatched metadata",
null,
{
operation: "database_file_size_mismatch",
encryptedPath,
actualSize: encryptedData.length,
expectedSize: metadata.dataSize,
},
);
throw new Error(
`Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
`This indicates corrupted files or an interrupted write operation.`,
);
}
let key: Buffer;
if (metadata.version === "v2") {
key = await this.systemCrypto.getDatabaseKey();
@@ -274,18 +475,43 @@ class DatabaseFileEncryption {
}
static isEncryptedDatabaseFile(filePath: string): boolean {
if (!fs.existsSync(filePath)) {
return false;
}
const metadataPath = `${filePath}${this.METADATA_FILE_SUFFIX}`;
if (fs.existsSync(metadataPath)) {
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
return (
metadata.version === this.VERSION &&
metadata.algorithm === this.ALGORITHM
);
} catch {
return false;
}
}
try {
const fileBuffer = fs.readFileSync(filePath);
if (fileBuffer.length < 4) return false;
const metadataLength = fileBuffer.readUInt32BE(0);
const metadataEnd = 4 + metadataLength;
if (metadataLength <= 0 || metadataEnd > fileBuffer.length) {
return false;
}
const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataJson);
return (
metadata.version === this.VERSION &&
metadata.algorithm === this.ALGORITHM &&
!!metadata.iv &&
!!metadata.tag
);
} catch {
return false;
@@ -322,6 +548,125 @@ class DatabaseFileEncryption {
}
}
static getDiagnosticInfo(encryptedPath: string): {
dataFile: {
exists: boolean;
size?: number;
mtime?: string;
readable?: boolean;
};
metadataFile: {
exists: boolean;
size?: number;
mtime?: string;
readable?: boolean;
content?: EncryptedFileMetadata;
};
environment: {
dataDir: string;
envPath: string;
envFileExists: boolean;
envFileReadable: boolean;
hasEnvKey: boolean;
envKeyLength: number;
};
validation: {
filesConsistent: boolean;
sizeMismatch?: boolean;
expectedSize?: number;
actualSize?: number;
};
} {
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
const result: ReturnType<typeof this.getDiagnosticInfo> = {
dataFile: { exists: false },
metadataFile: { exists: false },
environment: {
dataDir,
envPath,
envFileExists: false,
envFileReadable: false,
hasEnvKey: !!process.env.DATABASE_KEY,
envKeyLength: process.env.DATABASE_KEY?.length || 0,
},
validation: {
filesConsistent: false,
},
};
try {
result.dataFile.exists = fs.existsSync(encryptedPath);
if (result.dataFile.exists) {
try {
fs.accessSync(encryptedPath, fs.constants.R_OK);
result.dataFile.readable = true;
const stats = fs.statSync(encryptedPath);
result.dataFile.size = stats.size;
result.dataFile.mtime = stats.mtime.toISOString();
} catch {
result.dataFile.readable = false;
}
}
result.metadataFile.exists = fs.existsSync(metadataPath);
if (result.metadataFile.exists) {
try {
fs.accessSync(metadataPath, fs.constants.R_OK);
result.metadataFile.readable = true;
const stats = fs.statSync(metadataPath);
result.metadataFile.size = stats.size;
result.metadataFile.mtime = stats.mtime.toISOString();
const content = fs.readFileSync(metadataPath, "utf8");
result.metadataFile.content = JSON.parse(content);
} catch {
result.metadataFile.readable = false;
}
}
result.environment.envFileExists = fs.existsSync(envPath);
if (result.environment.envFileExists) {
try {
fs.accessSync(envPath, fs.constants.R_OK);
result.environment.envFileReadable = true;
} catch (error) {}
}
if (
result.dataFile.exists &&
result.metadataFile.exists &&
result.metadataFile.content
) {
result.validation.filesConsistent = true;
if (result.metadataFile.content.dataSize !== undefined) {
result.validation.expectedSize = result.metadataFile.content.dataSize;
result.validation.actualSize = result.dataFile.size;
result.validation.sizeMismatch =
result.metadataFile.content.dataSize !== result.dataFile.size;
if (result.validation.sizeMismatch) {
result.validation.filesConsistent = false;
}
}
}
} catch (error) {
databaseLogger.error("Failed to generate diagnostic info", error, {
operation: "diagnostic_info_failed",
encryptedPath,
});
}
databaseLogger.info("Database encryption diagnostic info", {
operation: "diagnostic_info_generated",
...result,
});
return result;
}
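As the commit message describes for db/index.ts, this is meant to be invoked automatically when decryption fails. A hypothetical call site — the decrypt method name comes from the commit summary, and its exact signature plus the file path are assumptions:

// Hypothetical startup path: on a GCM auth failure, emit the diagnostic
// report before rethrowing so users have something actionable in the logs.
const encryptedPath = "./db/data/database.sqlite.encrypted"; // illustrative path
try {
const plaintext = DatabaseFileEncryption.decryptDatabaseToBuffer(encryptedPath);
// ... load the in-memory database from plaintext ...
} catch (error) {
// Best effort: getDiagnosticInfo() logs the report as well as returning it.
DatabaseFileEncryption.getDiagnosticInfo(encryptedPath);
throw error;
}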
static async createEncryptedBackup(
databasePath: string,
backupDir: string,


@@ -82,7 +82,7 @@ export class LazyFieldEncryption {
legacyFieldName,
);
return decrypted;
} catch (error) {}
}
const sensitiveFields = [
@@ -174,7 +174,7 @@ export class LazyFieldEncryption {
wasPlaintext: false,
wasLegacyEncryption: true,
};
} catch (error) {}
}
return {
encrypted: fieldValue,


@@ -0,0 +1,146 @@
interface LoginAttempt {
count: number;
firstAttempt: number;
lockedUntil?: number;
}
class LoginRateLimiter {
private ipAttempts = new Map<string, LoginAttempt>();
private usernameAttempts = new Map<string, LoginAttempt>();
private readonly MAX_ATTEMPTS = 5;
private readonly WINDOW_MS = 10 * 60 * 1000;
private readonly LOCKOUT_MS = 10 * 60 * 1000;
constructor() {
setInterval(() => this.cleanup(), 5 * 60 * 1000);
}
private cleanup(): void {
const now = Date.now();
for (const [ip, attempt] of this.ipAttempts.entries()) {
if (attempt.lockedUntil && attempt.lockedUntil < now) {
this.ipAttempts.delete(ip);
} else if (
!attempt.lockedUntil &&
now - attempt.firstAttempt > this.WINDOW_MS
) {
this.ipAttempts.delete(ip);
}
}
for (const [username, attempt] of this.usernameAttempts.entries()) {
if (attempt.lockedUntil && attempt.lockedUntil < now) {
this.usernameAttempts.delete(username);
} else if (
!attempt.lockedUntil &&
now - attempt.firstAttempt > this.WINDOW_MS
) {
this.usernameAttempts.delete(username);
}
}
}
recordFailedAttempt(ip: string, username?: string): void {
const now = Date.now();
const ipAttempt = this.ipAttempts.get(ip);
if (!ipAttempt) {
this.ipAttempts.set(ip, {
count: 1,
firstAttempt: now,
});
} else if (now - ipAttempt.firstAttempt > this.WINDOW_MS) {
this.ipAttempts.set(ip, {
count: 1,
firstAttempt: now,
});
} else {
ipAttempt.count++;
if (ipAttempt.count >= this.MAX_ATTEMPTS) {
ipAttempt.lockedUntil = now + this.LOCKOUT_MS;
}
}
if (username) {
const userAttempt = this.usernameAttempts.get(username);
if (!userAttempt) {
this.usernameAttempts.set(username, {
count: 1,
firstAttempt: now,
});
} else if (now - userAttempt.firstAttempt > this.WINDOW_MS) {
this.usernameAttempts.set(username, {
count: 1,
firstAttempt: now,
});
} else {
userAttempt.count++;
if (userAttempt.count >= this.MAX_ATTEMPTS) {
userAttempt.lockedUntil = now + this.LOCKOUT_MS;
}
}
}
}
resetAttempts(ip: string, username?: string): void {
this.ipAttempts.delete(ip);
if (username) {
this.usernameAttempts.delete(username);
}
}
isLocked(
ip: string,
username?: string,
): { locked: boolean; remainingTime?: number } {
const now = Date.now();
const ipAttempt = this.ipAttempts.get(ip);
if (ipAttempt?.lockedUntil && ipAttempt.lockedUntil > now) {
return {
locked: true,
remainingTime: Math.ceil((ipAttempt.lockedUntil - now) / 1000),
};
}
if (username) {
const userAttempt = this.usernameAttempts.get(username);
if (userAttempt?.lockedUntil && userAttempt.lockedUntil > now) {
return {
locked: true,
remainingTime: Math.ceil((userAttempt.lockedUntil - now) / 1000),
};
}
}
return { locked: false };
}
getRemainingAttempts(ip: string, username?: string): number {
const now = Date.now();
let minRemaining = this.MAX_ATTEMPTS;
const ipAttempt = this.ipAttempts.get(ip);
if (ipAttempt && now - ipAttempt.firstAttempt <= this.WINDOW_MS) {
const ipRemaining = Math.max(0, this.MAX_ATTEMPTS - ipAttempt.count);
minRemaining = Math.min(minRemaining, ipRemaining);
}
if (username) {
const userAttempt = this.usernameAttempts.get(username);
if (userAttempt && now - userAttempt.firstAttempt <= this.WINDOW_MS) {
const userRemaining = Math.max(
0,
this.MAX_ATTEMPTS - userAttempt.count,
);
minRemaining = Math.min(minRemaining, userRemaining);
}
}
return minRemaining;
}
}
export const loginRateLimiter = new LoginRateLimiter();
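A minimal usage sketch for an Express-style login handler; the route, the request shape, and the verifyCredentials helper are assumptions for illustration, not part of this module:

import express from "express";

declare function verifyCredentials(u: string, p: string): Promise<boolean>; // assumed helper

const app = express();
app.use(express.json());

// Hypothetical login route wiring up the limiter exported above.
app.post("/login", async (req, res) => {
const ip = req.ip ?? "unknown";
const { username, password } = req.body;
const { locked, remainingTime } = loginRateLimiter.isLocked(ip, username);
if (locked) {
return res
.status(429)
.json({ error: `Too many attempts; retry in ${remainingTime}s` });
}
if (!(await verifyCredentials(username, password))) {
loginRateLimiter.recordFailedAttempt(ip, username);
return res.status(401).json({
remainingAttempts: loginRateLimiter.getRemainingAttempts(ip, username),
});
}
loginRateLimiter.resetAttempts(ip, username);
return res.json({ success: true });
});

Tracking IP and username in parallel means a single-IP password spray and a distributed attack on one account both hit the same five-attempt ceiling.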


@@ -1,4 +1,5 @@
import ssh2Pkg from "ssh2";
import { sshLogger } from "./logger.js";
const ssh2Utils = ssh2Pkg.utils;
function detectKeyTypeFromContent(keyContent: string): string {
@@ -84,7 +85,7 @@ function detectKeyTypeFromContent(keyContent: string): string {
} else if (decodedString.includes("1.3.101.112")) {
return "ssh-ed25519";
}
} catch (error) {}
if (content.length < 800) {
return "ssh-ed25519";
@@ -140,7 +141,7 @@ function detectPublicKeyTypeFromContent(publicKeyContent: string): string {
} else if (decodedString.includes("1.3.101.112")) {
return "ssh-ed25519";
}
} catch (error) {}
if (content.length < 400) {
return "ssh-ed25519";
@@ -242,7 +243,7 @@ export function parseSSHKey(
useSSH2 = true;
}
} catch (error) {}
}
if (!useSSH2) {
@@ -268,7 +269,7 @@ export function parseSSHKey(
success: true,
};
}
} catch (error) {}
return {
privateKey: privateKeyData,

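The "1.3.101.112" string checked above is the dotted form of the Ed25519 algorithm OID. For reference, a standalone sketch that matches the OID's actual DER byte encoding rather than its dotted-text form — an alternative heuristic, not this module's code:

// Ed25519 OID 1.3.101.112 encodes in DER as 06 03 2B 65 70
// (tag OBJECT IDENTIFIER, length 3, then 40*1+3, 101, 112).
const ED25519_OID_DER = Buffer.from([0x06, 0x03, 0x2b, 0x65, 0x70]);

function looksLikeEd25519Der(base64Body: string): boolean {
const der = Buffer.from(base64Body, "base64");
return der.includes(ED25519_OID_DER);
}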

@@ -51,17 +51,8 @@ class SystemCrypto {
},
);
}
} catch (fileError) {}
databaseLogger.warn("Generating new JWT secret", {
operation: "jwt_generating_new_secret",
});
await this.generateAndGuideUser();
} catch (error) {
databaseLogger.error("Failed to initialize JWT secret", error, {
@@ -80,29 +71,44 @@ class SystemCrypto {
async initializeDatabaseKey(): Promise<void> {
try {
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
const envKey = process.env.DATABASE_KEY;
if (envKey && envKey.length >= 64) {
this.databaseKey = Buffer.from(envKey, "hex");
const keyFingerprint = crypto
.createHash("sha256")
.update(this.databaseKey)
.digest("hex")
.substring(0, 16);
return;
}
try {
const envContent = await fs.readFile(envPath, "utf8");
const dbKeyMatch = envContent.match(/^DATABASE_KEY=(.+)$/m);
if (dbKeyMatch && dbKeyMatch[1] && dbKeyMatch[1].length >= 64) {
this.databaseKey = Buffer.from(dbKeyMatch[1], "hex");
process.env.DATABASE_KEY = dbKeyMatch[1];
const keyFingerprint = crypto
.createHash("sha256")
.update(this.databaseKey)
.digest("hex")
.substring(0, 16);
return;
}
} catch (fileError) {}
await this.generateAndGuideDatabaseKey();
} catch (error) {
databaseLogger.error("Failed to initialize database key", error, {
operation: "db_key_init_failed",
dataDir: process.env.DATA_DIR || "./db/data",
});
throw new Error("Database key initialization failed");
}
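The >= 64-character check above corresponds to 32 bytes hex-encoded, i.e. an AES-256 key. A sketch of generating a compatible DATABASE_KEY together with the same short fingerprint shape used in this file's logging (the hex encoding and 16-hex-char fingerprint convention are taken from the code above; the .env destination is an assumption):

import crypto from "node:crypto";

// 32 random bytes -> 64 hex characters, satisfying the length check above.
const databaseKey = crypto.randomBytes(32).toString("hex");

// Same fingerprint shape as initializeDatabaseKey(): SHA-256, first 16 hex chars.
const fingerprint = crypto
.createHash("sha256")
.update(Buffer.from(databaseKey, "hex"))
.digest("hex")
.substring(0, 16);

console.log(`DATABASE_KEY=${databaseKey}`); // e.g. append to ./db/data/.env
console.log(`fingerprint: ${fingerprint}`);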
@@ -134,7 +140,7 @@ class SystemCrypto {
process.env.INTERNAL_AUTH_TOKEN = tokenMatch[1];
return;
}
} catch (error) {}
await this.generateAndGuideInternalAuthToken();
} catch (error) {


@@ -21,8 +21,8 @@ interface EncryptedDEK {
interface UserSession {
dataKey: Buffer;
expiresAt: number;
lastActivity?: number;
}
class UserCrypto {
@@ -33,8 +33,6 @@ class UserCrypto {
private static readonly PBKDF2_ITERATIONS = 100000;
private static readonly KEK_LENGTH = 32;
private static readonly DEK_LENGTH = 32;
private constructor() {
setInterval(
@@ -69,7 +67,10 @@ class UserCrypto {
DEK.fill(0);
}
async setupOIDCUserEncryption(
userId: string,
sessionDurationMs: number,
): Promise<void> {
const existingEncryptedDEK = await this.getEncryptedDEK(userId);
let DEK: Buffer;
@@ -104,14 +105,17 @@ class UserCrypto {
const now = Date.now();
this.userSessions.set(userId, {
dataKey: Buffer.from(DEK),
lastActivity: now,
expiresAt: now + sessionDurationMs,
});
DEK.fill(0);
}
async authenticateUser(
userId: string,
password: string,
sessionDurationMs: number,
): Promise<boolean> {
try {
const kekSalt = await this.getKEKSalt(userId);
if (!kekSalt) return false;
@@ -144,8 +148,7 @@ class UserCrypto {
this.userSessions.set(userId, {
dataKey: Buffer.from(DEK),
lastActivity: now,
expiresAt: now + sessionDurationMs,
});
DEK.fill(0);
@@ -161,13 +164,49 @@ class UserCrypto {
}
}
async authenticateOIDCUser(
userId: string,
sessionDurationMs: number,
): Promise<boolean> {
try {
const oidcEncryptedDEK = await this.getOIDCEncryptedDEK(userId);
if (oidcEncryptedDEK) {
const systemKey = this.deriveOIDCSystemKey(userId);
const DEK = this.decryptDEK(oidcEncryptedDEK, systemKey);
systemKey.fill(0);
if (!DEK || DEK.length === 0) {
databaseLogger.error(
"Failed to decrypt OIDC DEK for dual-auth user",
{
operation: "oidc_auth_dual_decrypt_failed",
userId,
},
);
return false;
}
const now = Date.now();
const oldSession = this.userSessions.get(userId);
if (oldSession) {
oldSession.dataKey.fill(0);
}
this.userSessions.set(userId, {
dataKey: Buffer.from(DEK),
expiresAt: now + sessionDurationMs,
});
DEK.fill(0);
return true;
}
const kekSalt = await this.getKEKSalt(userId);
const encryptedDEK = await this.getEncryptedDEK(userId);
if (!kekSalt || !encryptedDEK) {
await this.setupOIDCUserEncryption(userId, sessionDurationMs);
return true;
}
@@ -176,7 +215,7 @@ class UserCrypto {
systemKey.fill(0);
if (!DEK || DEK.length === 0) {
await this.setupOIDCUserEncryption(userId, sessionDurationMs);
return true;
}
@@ -189,15 +228,19 @@ class UserCrypto {
this.userSessions.set(userId, {
dataKey: Buffer.from(DEK),
lastActivity: now,
expiresAt: now + sessionDurationMs,
});
DEK.fill(0);
return true;
} catch (error) {
databaseLogger.error("OIDC authentication failed", error, {
operation: "oidc_auth_error",
userId,
error: error instanceof Error ? error.message : "Unknown",
});
await this.setupOIDCUserEncryption(userId, sessionDurationMs);
return true;
}
}
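Since the refactor threads sessionDurationMs through every entry point, a caller can align the crypto session with the lifetime of the auth token it issues. A hypothetical call — how the singleton is obtained, and the 24-hour figure, are assumptions, not values from this commit:

declare const userCrypto: UserCrypto; // however the singleton is exposed
declare const userId: string;
declare const password: string;

// Keep the DEK session alive exactly as long as the issued auth token,
// instead of the removed hard-coded SESSION_DURATION.
const sessionDurationMs = 24 * 60 * 60 * 1000; // e.g. match the JWT TTL
const ok = await userCrypto.authenticateUser(userId, password, sessionDurationMs);
if (!ok) {
// surface the failure / feed the login rate limiter
}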
@@ -219,16 +262,6 @@ class UserCrypto {
return null;
}
session.lastActivity = now;
return session.dataKey;
}
@@ -331,6 +364,83 @@ class UserCrypto {
}
}
/**
* Convert a password-based user's encryption to DUAL-AUTH encryption.
* This is used when linking an OIDC account to a password account for dual-auth.
*
* IMPORTANT: This does NOT delete the password-based KEK salt!
* The user needs to maintain BOTH password and OIDC authentication methods.
* We keep the password KEK salt so password login still works.
* We also store the DEK encrypted with OIDC system key for OIDC login.
*/
async convertToOIDCEncryption(userId: string): Promise<void> {
try {
const existingEncryptedDEK = await this.getEncryptedDEK(userId);
const existingKEKSalt = await this.getKEKSalt(userId);
if (!existingEncryptedDEK && !existingKEKSalt) {
databaseLogger.warn("No existing encryption to convert for user", {
operation: "convert_to_oidc_encryption_skip",
userId,
});
return;
}
const existingDEK = this.getUserDataKey(userId);
if (!existingDEK) {
throw new Error(
"Cannot convert to OIDC encryption - user session not active. Please log in with password first.",
);
}
const systemKey = this.deriveOIDCSystemKey(userId);
const oidcEncryptedDEK = this.encryptDEK(existingDEK, systemKey);
systemKey.fill(0);
const key = `user_encrypted_dek_oidc_${userId}`;
const value = JSON.stringify(oidcEncryptedDEK);
const { getDb } = await import("../database/db/index.js");
const { settings } = await import("../database/db/schema.js");
const { eq } = await import("drizzle-orm");
const existing = await getDb()
.select()
.from(settings)
.where(eq(settings.key, key));
if (existing.length > 0) {
await getDb()
.update(settings)
.set({ value })
.where(eq(settings.key, key));
} else {
await getDb().insert(settings).values({ key, value });
}
databaseLogger.info(
"Converted user encryption to dual-auth (password + OIDC)",
{
operation: "convert_to_oidc_encryption_preserved",
userId,
},
);
const { saveMemoryDatabaseToFile } = await import(
"../database/db/index.js"
);
await saveMemoryDatabaseToFile();
} catch (error) {
databaseLogger.error("Failed to convert to OIDC encryption", error, {
operation: "convert_to_oidc_encryption_error",
userId,
error: error instanceof Error ? error.message : "Unknown error",
});
throw error;
}
}
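Conceptually, the dual-auth scheme above is envelope encryption: one DEK wrapped twice, once under the password-derived KEK and once under the OIDC system key, so either login path can unwrap the same key. A simplified self-contained sketch — the salt handling and the deriveOIDCSystemKey stand-in are assumptions, not this class's real helpers:

import crypto from "node:crypto";

// One DEK, two independent wrappings; unwrapping either copy yields the same DEK.
function wrapDEK(dek: Buffer, wrappingKey: Buffer) {
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv("aes-256-gcm", wrappingKey, iv);
const data = Buffer.concat([cipher.update(dek), cipher.final()]);
return {
iv: iv.toString("hex"),
tag: cipher.getAuthTag().toString("hex"),
data: data.toString("hex"),
};
}

const dek = crypto.randomBytes(32);
const kekSalt = crypto.randomBytes(16);
const passwordKEK = crypto.pbkdf2Sync("user-password", kekSalt, 100000, 32, "sha256");
const oidcSystemKey = crypto.randomBytes(32); // stand-in for deriveOIDCSystemKey()

const passwordWrapped = wrapDEK(dek, passwordKEK); // unlocked by password login
const oidcWrapped = wrapDEK(dek, oidcSystemKey); // unlocked by OIDC login

This is why convertToOIDCEncryption() deliberately keeps the password KEK salt: deleting it would orphan the password-wrapped copy and break password login.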
private async validatePassword(
userId: string,
password: string,
@@ -359,10 +469,7 @@ class UserCrypto {
const expiredUsers: string[] = [];
for (const [userId, session] of this.userSessions.entries()) {
if (now > session.expiresAt) {
session.dataKey.fill(0);
expiredUsers.push(userId);
}
@@ -512,6 +619,26 @@ class UserCrypto {
return null;
}
}
private async getOIDCEncryptedDEK(
userId: string,
): Promise<EncryptedDEK | null> {
try {
const key = `user_encrypted_dek_oidc_${userId}`;
const result = await getDb()
.select()
.from(settings)
.where(eq(settings.key, key));
if (result.length === 0) {
return null;
}
return JSON.parse(result[0].value);
} catch {
return null;
}
}
}
export { UserCrypto, type KEKSalt, type EncryptedDEK };