* fix: Selecting a host to edit did not update the view (#438)

* fix: Checksum issue with chocolatey

* fix: Remove homebrew old stuff

* Add Korean translation (#439)

Co-authored-by: 송준우 <2484@coreit.co.kr>

* feat: Automate flatpak

* fix: Add ImageMagick to electron builder to resolve build error

* fix: Build error with runtime repo flag

* fix: Flatpak runtime error and install freedesktop ver warning

* fix: Flatpak runtime error and install freedesktop ver warning

* feat: Re-add homebrew cask and move scripts to backend

* fix: No sandbox flag issue

* fix: Change name for electron macos cask output

* fix: Sandbox error with Linux

* fix: Remove coming soon for app stores in readme

* Add a comment at the end of the public_key on the host on deploy (#440)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* - Add new interface for credential DB
  - Add credential name as a comment in the server's authorized_keys file

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Sudo password auto-fill (#441)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* feat: Sudo password auto-fill

* fix: Locale JSON schema

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Add Italian language (#445)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Add Italian language

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Auto collapse snippet folders (#448)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* feat: Add collapsible snippets (customizable in user profile)

* Translations (#447)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Add Italian language

* Fix translations;

Removed duplicate keys, synchronised other languages using English as the source, translated added keys, fixed inaccurate translations.
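
  The synchronisation rule described above (English as the source of truth, stale keys dropped, missing keys filled from English) can be sketched roughly as follows, assuming flat key/value locale objects; the helper name is illustrative, not the project's actual code:

  ```typescript
  type Locale = Record<string, string>;

  // Sync a target locale against the English source: keep existing
  // translations, fall back to English for missing keys, and drop any
  // key that no longer exists in English (duplicates, leftovers).
  function syncLocale(english: Locale, target: Locale): Locale {
    const synced: Locale = {};
    for (const key of Object.keys(english)) {
      synced[key] = key in target ? target[key] : english[key];
    }
    return synced;
  }
  ```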

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Remove PTY-level keepalive (#449)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Remove PTY-level keepalive to prevent unwanted terminal output; use SSH-level keepalive instead
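
  The SSH-level alternative can be sketched with the ssh2 client's connect options (`keepaliveInterval`, `keepaliveCountMax`): protocol-level probes travel in the SSH transport, so nothing is ever written to the PTY or echoed to the terminal. The interval values here are illustrative, not necessarily what the app uses:

  ```typescript
  // Illustrative connect config for the ssh2 Client; a PTY-level keepalive
  // (e.g. periodically writing a no-op to the shell) would show up as
  // terminal output, while these options stay invisible to the user.
  function buildConnectConfig(host: string, port: number) {
    return {
      host,
      port,
      keepaliveInterval: 30_000, // ms between SSH-level keepalive packets
      keepaliveCountMax: 3,      // drop the connection after 3 unanswered probes
    };
  }
  ```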

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* feat: Separate server stats and tunnel management (improved both UIs), then start initial Docker implementation

* fix: finalize adding docker to db

* feat: Add docker management support (local squash)

* Fix RBAC role system bugs and improve UX (#446)

* Fix RBAC role system bugs and improve UX

- Fix user list dropdown selection in host sharing
- Fix role sharing permissions to include role-based access
- Fix translation template interpolation for success messages
- Standardize system roles to admin and user only
- Auto-assign user role to new registrations
- Remove blocking confirmation dialogs in modal contexts
- Add missing i18n keys for common actions
- Fix button type to prevent unintended form submissions
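
The auto-assignment rule from the list above can be sketched as a pure mapping from the legacy `is_admin` flag to one of the two system roles (types and the helper name are illustrative; the real migration writes `user_roles` rows):

```typescript
interface User { id: string; isAdmin: boolean; }
interface UserRole { userId: string; role: "admin" | "user"; }

// Admins get the system `admin` role, everyone else the system `user` role,
// mirroring the backfill from users.is_admin in the schema migration.
function defaultRoleAssignments(users: User[]): UserRole[] {
  return users.map((u) => ({ userId: u.id, role: u.isAdmin ? "admin" : "user" }));
}
```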

* Enhance RBAC system with UI improvements and security fixes

- Move role assignment to Users tab with per-user role management
- Protect system roles (admin/user) from editing and manual assignment
- Simplify permission system: remove Use level, keep View and Manage
- Hide Update button and Sharing tab for view-only/shared hosts
- Prevent users from sharing hosts with themselves
- Unify table and modal styling across admin panels
- Auto-assign system roles on user registration
- Add permission metadata to host interface

* Add empty state message for role assignment

- Display helpful message when no custom roles available
- Clarify that system roles are auto-assigned
- Add noCustomRolesToAssign translation in English and Chinese

* fix: Prevent credential sharing errors for shared hosts

- Skip credential resolution for shared hosts with credential authentication
  to prevent decryption errors (credentials are encrypted per-user)
- Add warning alert in sharing tab when host uses credential authentication
- Inform users that shared users cannot connect to credential-based hosts
- Add translations for credential sharing warning (EN/ZH)

This prevents authentication failures when sharing hosts configured
with credential authentication while maintaining security by keeping
credentials isolated per user.
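
The guard described above might look roughly like this; the types and function name are hypothetical, only the rule itself (credential-based auth can be resolved for the owning user alone, because credentials are encrypted per-user) comes from the commit:

```typescript
interface HostRef {
  ownerId: string;
  authType: "password" | "key" | "credential";
}

// Skip credential resolution for shared hosts: a credential encrypted with
// the owner's key cannot be decrypted by another user, so attempting it
// would only produce a decryption error.
function canResolveCredentials(host: HostRef, requestingUserId: string): boolean {
  if (host.authType !== "credential") return true; // inline secrets travel with the host
  return host.ownerId === requestingUserId;        // per-user encryption: owner only
}
```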

* feat: Improve RBAC UI and fix some bugs

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* SOCKS5 support (#452)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* SOCKS5 support

Adding single and chain socks5 proxy support
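
A sketch of how single vs. chained proxy settings could be normalized, using the new `socks5_host`/`socks5_port`/`socks5_proxy_chain` columns as the data shape; the helper itself is hypothetical:

```typescript
interface Socks5Hop {
  host: string;
  port: number;
  username?: string;
  password?: string;
}

// A single proxy is just a one-hop chain; a stored JSON chain, when
// present, takes precedence over the single-proxy fields.
function resolveProxyChain(
  single: Socks5Hop | null,
  chainJson: string | null,
): Socks5Hop[] {
  if (chainJson) return JSON.parse(chainJson) as Socks5Hop[];
  return single ? [single] : [];
}
```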

* fix: cleanup files

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* Add Notes and Expiry fields (#453)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Add Notes and Expiry fields

* fix: cleanup files

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* fix: ssh host types

* fix: sudo incorrect styling and remove expiration date

* feat: add sudo password and add diagonal backgrounds

* fix: snippet running on enter key

* fix: base64 decoding

* fix: improve server stats / rbac

* fix: wrap ssh host json export in hosts array

* feat: auto trim host inputs, fix file manager jump hosts, prevent dashboard duplicates, fix file manager terminal not updating its size, improve left sidebar sorting, hide/show tags, add appearance user profile tab, add new host manager tabs.

* feat: improve terminal connection speed

* fix: sqlite constraint errors and support non-root user (nginx perm issue)

* feat: add beta syntax highlighting to terminal

* feat: update imports and improve admin settings user management

* chore: update translations

* chore: update translations

* feat: Complete light mode implementation with semantic theme system (#450)

- Add comprehensive light/dark mode CSS variables with semantic naming
- Implement theme-aware scrollbars using CSS variables
- Add light mode backgrounds: --bg-base, --bg-elevated, --bg-surface, etc.
- Add theme-aware borders: --border-base, --border-panel, --border-subtle
- Add semantic text colors: --foreground-secondary, --foreground-subtle
- Convert oklch colors to hex for better compatibility
- Add theme awareness to CodeMirror editors
- Update dark mode colors for consistency (background, sidebar, card, muted, input)
- Add Tailwind color mappings for semantic classes

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* fix: syntax errors

* chore: update/match themes and split admin settings

* feat: add translation workflow and remove old translation.json

* fix: translation workflow error

* fix: translation workflow error

* feat: improve translation system and update workflow

* fix: wrong path for translations

* fix: change translation to flat files

* fix: gh rule error

* chore: auto-translate to multiple languages (#458)

* chore: improve organization and made a few styling changes in host manager

* feat: improve terminal stability and split out the host manager

* fix: add unversioned files

* chore: migrate all to use the new theme system

* fix: wrong animation line colors

* fix: rbac implementation general issues (local squash)

* fix: remove unneeded files

* feat: add 10 new langs

* chore: update gitignore

* chore: auto-translate to multiple languages (#459)

* fix: improve tunnel system

* fix: properly split tabs, still need to fix up the host manager

* chore: cleanup files (possible RC)

* feat: add norwegian

* chore: auto-translate to multiple languages (#461)

* fix: small qol fixes and began readme update

* fix: run cleanup script

* feat: add docker docs button

* feat: general bug fixes and readme updates

* fix: translations

* chore: auto-translate to multiple languages (#462)

* fix: cleanup files

* fix: test new translation issue and add better server-stats support

* fix: translation error

* chore: auto-translate to multiple languages (#463)

* fix: mismatched translation text

* chore: auto-translate to multiple languages (#465)

* fix: mismatched translation text

* fix: mismatched translation text

* chore: auto-translate to multiple languages (#466)

* fix: mismatched translation text

* fix: mismatched translation text

* fix: mismatched translation text

* chore: auto-translate to multiple languages (#467)

* fix: mismatched translation text

* chore: auto-translate to multiple languages (#468)

* feat: add to readme, a few qol changes, and improve server stats in general

* chore: auto-translate to multiple languages (#469)

* feat: turn disk usage into a graph and fix issue with terminal console

* fix: electron build error and hide icons when shared

* chore: run clean

* fix: general server stats issues, file manager decoding, ui qol

* fix: add dashboard line breaks

* fix: docker console error

* fix: docker console not loading and mismatched striped background for electron

* fix: docker console not loading

* chore: docker console not loading in docker

* chore: translate readme to chinese

* chore: match package lock to package json

* chore: nginx config issue for docker console

* chore: auto-translate to multiple languages (#470)

---------

Co-authored-by: Tran Trung Kien <kientt13.7@gmail.com>
Co-authored-by: junu <bigdwarf_@naver.com>
Co-authored-by: 송준우 <2484@coreit.co.kr>
Co-authored-by: SlimGary <trash.slim@gmail.com>
Co-authored-by: Nunzio Marfè <nunzio.marfe@protonmail.com>
Co-authored-by: Wesley Reid <starhound@lostsouls.org>
Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Denis <38875137+Medvedinca@users.noreply.github.com>
Co-authored-by: Peet McKinney <68706879+PeetMcK@users.noreply.github.com>
Merged in pull request #471.
Author: Luke Gustafson, 2025-12-31 22:20:12 -06:00 (committed by GitHub)
parent 7139290d14
commit ad86c2040b
225 changed files with 87356 additions and 17706 deletions

View File

@@ -2,8 +2,8 @@ import express from "express";
import cors from "cors";
import cookieParser from "cookie-parser";
import { getDb } from "./database/db/index.js";
-import { recentActivity, sshData } from "./database/db/schema.js";
-import { eq, and, desc } from "drizzle-orm";
+import { recentActivity, sshData, hostAccess } from "./database/db/schema.js";
+import { eq, and, desc, or } from "drizzle-orm";
import { dashboardLogger } from "./utils/logger.js";
import { SimpleDBOps } from "./utils/simple-db-ops.js";
import { AuthManager } from "./utils/auth-manager.js";
@@ -15,7 +15,7 @@ const authManager = AuthManager.getInstance();
const serverStartTime = Date.now();
const activityRateLimiter = new Map<string, number>();
-const RATE_LIMIT_MS = 1000; // 1 second window
+const RATE_LIMIT_MS = 1000;
app.use(
cors({
@@ -127,9 +127,18 @@ app.post("/activity/log", async (req, res) => {
});
}
-if (type !== "terminal" && type !== "file_manager") {
+if (
+![
+"terminal",
+"file_manager",
+"server_stats",
+"tunnel",
+"docker",
+].includes(type)
+) {
return res.status(400).json({
-error: "Invalid activity type. Must be 'terminal' or 'file_manager'",
+error:
+"Invalid activity type. Must be 'terminal', 'file_manager', 'server_stats', 'tunnel', or 'docker'",
});
}
@@ -155,7 +164,7 @@ app.post("/activity/log", async (req, res) => {
entriesToDelete.forEach((key) => activityRateLimiter.delete(key));
}
-const hosts = await SimpleDBOps.select(
+const ownedHosts = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
@@ -164,8 +173,19 @@ app.post("/activity/log", async (req, res) => {
userId,
);
-if (hosts.length === 0) {
-return res.status(404).json({ error: "Host not found" });
+if (ownedHosts.length === 0) {
+const sharedHosts = await getDb()
+.select()
+.from(hostAccess)
+.where(
+and(eq(hostAccess.hostId, hostId), eq(hostAccess.userId, userId)),
+);
+if (sharedHosts.length === 0) {
+return res
+.status(404)
+.json({ error: "Host not found or access denied" });
+}
}
const result = (await SimpleDBOps.insert(

View File

@@ -8,6 +8,7 @@ import alertRoutes from "./routes/alerts.js";
import credentialsRoutes from "./routes/credentials.js";
import snippetsRoutes from "./routes/snippets.js";
import terminalRoutes from "./routes/terminal.js";
import rbacRoutes from "./routes/rbac.js";
import cors from "cors";
import fetch from "node-fetch";
import fs from "fs";
@@ -1436,6 +1437,7 @@ app.use("/alerts", alertRoutes);
app.use("/credentials", credentialsRoutes);
app.use("/snippets", snippetsRoutes);
app.use("/terminal", terminalRoutes);
app.use("/rbac", rbacRoutes);
app.use(
(

View File

@@ -201,13 +201,21 @@ async function initializeCompleteDatabase(): Promise<void> {
enable_tunnel INTEGER NOT NULL DEFAULT 1,
tunnel_connections TEXT,
enable_file_manager INTEGER NOT NULL DEFAULT 1,
enable_docker INTEGER NOT NULL DEFAULT 0,
default_path TEXT,
autostart_password TEXT,
autostart_key TEXT,
autostart_key_password TEXT,
force_keyboard_interactive TEXT,
stats_config TEXT,
docker_config TEXT,
terminal_config TEXT,
notes TEXT,
use_socks5 INTEGER,
socks5_host TEXT,
socks5_port INTEGER,
socks5_username TEXT,
socks5_password TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
@@ -328,6 +336,81 @@ async function initializeCompleteDatabase(): Promise<void> {
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS host_access (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT,
role_id INTEGER,
granted_by TEXT NOT NULL,
permission_level TEXT NOT NULL DEFAULT 'use',
expires_at TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_accessed_at TEXT,
access_count INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
description TEXT,
is_system INTEGER NOT NULL DEFAULT 0,
permissions TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS user_roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
role_id INTEGER NOT NULL,
granted_by TEXT,
granted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(user_id, role_id),
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE SET NULL
);
CREATE TABLE IF NOT EXISTS audit_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
username TEXT NOT NULL,
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id TEXT,
resource_name TEXT,
details TEXT,
ip_address TEXT,
user_agent TEXT,
success INTEGER NOT NULL,
error_message TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS session_recordings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
access_id INTEGER,
started_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
ended_at TEXT,
duration INTEGER,
commands TEXT,
dangerous_actions TEXT,
recording_path TEXT,
terminated_by_owner INTEGER DEFAULT 0,
termination_reason TEXT,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (access_id) REFERENCES host_access (id) ON DELETE SET NULL
);
`);
try {
@@ -486,11 +569,30 @@ const migrateSchema = () => {
addColumnIfNotExists("ssh_data", "stats_config", "TEXT");
addColumnIfNotExists("ssh_data", "terminal_config", "TEXT");
addColumnIfNotExists("ssh_data", "quick_actions", "TEXT");
addColumnIfNotExists(
"ssh_data",
"enable_docker",
"INTEGER NOT NULL DEFAULT 0",
);
addColumnIfNotExists("ssh_data", "docker_config", "TEXT");
addColumnIfNotExists("ssh_data", "notes", "TEXT");
addColumnIfNotExists("ssh_data", "use_socks5", "INTEGER");
addColumnIfNotExists("ssh_data", "socks5_host", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_port", "INTEGER");
addColumnIfNotExists("ssh_data", "socks5_username", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_password", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_proxy_chain", "TEXT");
addColumnIfNotExists("ssh_credentials", "private_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "public_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "detected_key_type", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_password", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_key_password", "TEXT");
addColumnIfNotExists("file_manager_recent", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("file_manager_pinned", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("file_manager_shortcuts", "host_id", "INTEGER NOT NULL");
@@ -551,6 +653,317 @@ const migrateSchema = () => {
}
}
try {
sqlite.prepare("SELECT id FROM host_access LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS host_access (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT,
role_id INTEGER,
granted_by TEXT NOT NULL,
permission_level TEXT NOT NULL DEFAULT 'use',
expires_at TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_accessed_at TEXT,
access_count INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create host_access table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT role_id FROM host_access LIMIT 1").get();
} catch {
try {
sqlite.exec("ALTER TABLE host_access ADD COLUMN role_id INTEGER REFERENCES roles(id) ON DELETE CASCADE");
} catch (alterError) {
databaseLogger.warn("Failed to add role_id column", {
operation: "schema_migration",
error: alterError,
});
}
}
try {
sqlite.prepare("SELECT sudo_password FROM ssh_data LIMIT 1").get();
} catch {
try {
sqlite.exec("ALTER TABLE ssh_data ADD COLUMN sudo_password TEXT");
} catch (alterError) {
databaseLogger.warn("Failed to add sudo_password column", {
operation: "schema_migration",
error: alterError,
});
}
}
try {
sqlite.prepare("SELECT id FROM roles LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
description TEXT,
is_system INTEGER NOT NULL DEFAULT 0,
permissions TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create roles table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM user_roles LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS user_roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
role_id INTEGER NOT NULL,
granted_by TEXT,
granted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(user_id, role_id),
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE SET NULL
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create user_roles table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM audit_logs LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS audit_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
username TEXT NOT NULL,
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id TEXT,
resource_name TEXT,
details TEXT,
ip_address TEXT,
user_agent TEXT,
success INTEGER NOT NULL,
error_message TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create audit_logs table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM session_recordings LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS session_recordings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
access_id INTEGER,
started_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
ended_at TEXT,
duration INTEGER,
commands TEXT,
dangerous_actions TEXT,
recording_path TEXT,
terminated_by_owner INTEGER DEFAULT 0,
termination_reason TEXT,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (access_id) REFERENCES host_access (id) ON DELETE SET NULL
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create session_recordings table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM shared_credentials LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS shared_credentials (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_access_id INTEGER NOT NULL,
original_credential_id INTEGER NOT NULL,
target_user_id TEXT NOT NULL,
encrypted_username TEXT NOT NULL,
encrypted_auth_type TEXT NOT NULL,
encrypted_password TEXT,
encrypted_key TEXT,
encrypted_key_password TEXT,
encrypted_key_type TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
needs_re_encryption INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_access_id) REFERENCES host_access (id) ON DELETE CASCADE,
FOREIGN KEY (original_credential_id) REFERENCES ssh_credentials (id) ON DELETE CASCADE,
FOREIGN KEY (target_user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create shared_credentials table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
const existingRoles = sqlite.prepare("SELECT name, is_system FROM roles").all() as Array<{ name: string; is_system: number }>;
try {
const validSystemRoles = ['admin', 'user'];
const unwantedRoleNames = ['superAdmin', 'powerUser', 'readonly', 'member'];
let deletedCount = 0;
const deleteByName = sqlite.prepare("DELETE FROM roles WHERE name = ?");
for (const roleName of unwantedRoleNames) {
const result = deleteByName.run(roleName);
if (result.changes > 0) {
deletedCount += result.changes;
}
}
const deleteOldSystemRole = sqlite.prepare("DELETE FROM roles WHERE name = ? AND is_system = 1");
for (const role of existingRoles) {
if (role.is_system === 1 && !validSystemRoles.includes(role.name) && !unwantedRoleNames.includes(role.name)) {
const result = deleteOldSystemRole.run(role.name);
if (result.changes > 0) {
deletedCount += result.changes;
}
}
}
} catch (cleanupError) {
databaseLogger.warn("Failed to clean up old system roles", {
operation: "schema_migration",
error: cleanupError,
});
}
const systemRoles = [
{
name: "admin",
displayName: "rbac.roles.admin",
description: "Administrator with full access",
permissions: null,
},
{
name: "user",
displayName: "rbac.roles.user",
description: "Regular user",
permissions: null,
},
];
for (const role of systemRoles) {
const existingRole = sqlite.prepare("SELECT id FROM roles WHERE name = ?").get(role.name);
if (!existingRole) {
try {
sqlite.prepare(`
INSERT INTO roles (name, display_name, description, is_system, permissions)
VALUES (?, ?, ?, 1, ?)
`).run(role.name, role.displayName, role.description, role.permissions);
} catch (insertError) {
databaseLogger.warn(`Failed to create system role: ${role.name}`, {
operation: "schema_migration",
error: insertError,
});
}
}
}
try {
const adminUsers = sqlite.prepare("SELECT id FROM users WHERE is_admin = 1").all() as { id: string }[];
const normalUsers = sqlite.prepare("SELECT id FROM users WHERE is_admin = 0").all() as { id: string }[];
const adminRole = sqlite.prepare("SELECT id FROM roles WHERE name = 'admin'").get() as { id: number } | undefined;
const userRole = sqlite.prepare("SELECT id FROM roles WHERE name = 'user'").get() as { id: number } | undefined;
if (adminRole) {
const insertUserRole = sqlite.prepare(`
INSERT OR IGNORE INTO user_roles (user_id, role_id, granted_at)
VALUES (?, ?, CURRENT_TIMESTAMP)
`);
for (const admin of adminUsers) {
try {
insertUserRole.run(admin.id, adminRole.id);
} catch (error) {
// Ignore duplicate errors
}
}
}
if (userRole) {
const insertUserRole = sqlite.prepare(`
INSERT OR IGNORE INTO user_roles (user_id, role_id, granted_at)
VALUES (?, ?, CURRENT_TIMESTAMP)
`);
for (const user of normalUsers) {
try {
insertUserRole.run(user.id, userRole.id);
} catch (error) {
// Ignore duplicate errors
}
}
}
} catch (migrationError) {
databaseLogger.warn("Failed to migrate existing users to roles", {
operation: "schema_migration",
error: migrationError,
});
}
} catch (seedError) {
databaseLogger.warn("Failed to seed system roles", {
operation: "schema_migration",
error: seedError,
});
}
databaseLogger.success("Schema migration completed", {
operation: "schema_migration",
});

View File

@@ -66,6 +66,7 @@ export const sshData = sqliteTable("ssh_data", {
key: text("key", { length: 8192 }),
key_password: text("key_password"),
keyType: text("key_type"),
sudoPassword: text("sudo_password"),
autostartPassword: text("autostart_password"),
autostartKey: text("autostart_key", { length: 8192 }),
@@ -86,10 +87,22 @@ export const sshData = sqliteTable("ssh_data", {
enableFileManager: integer("enable_file_manager", { mode: "boolean" })
.notNull()
.default(true),
enableDocker: integer("enable_docker", { mode: "boolean" })
.notNull()
.default(false),
defaultPath: text("default_path"),
statsConfig: text("stats_config"),
terminalConfig: text("terminal_config"),
quickActions: text("quick_actions"),
notes: text("notes"),
useSocks5: integer("use_socks5", { mode: "boolean" }),
socks5Host: text("socks5_host"),
socks5Port: integer("socks5_port"),
socks5Username: text("socks5_username"),
socks5Password: text("socks5_password"),
socks5ProxyChain: text("socks5_proxy_chain"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
@@ -172,6 +185,11 @@ export const sshCredentials = sqliteTable("ssh_credentials", {
key_password: text("key_password"),
keyType: text("key_type"),
detectedKeyType: text("detected_key_type"),
systemPassword: text("system_password"),
systemKey: text("system_key", { length: 16384 }),
systemKeyPassword: text("system_key_password"),
usageCount: integer("usage_count").notNull().default(0),
lastUsed: text("last_used"),
createdAt: text("created_at")
@@ -276,3 +294,156 @@ export const commandHistory = sqliteTable("command_history", {
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const hostAccess = sqliteTable("host_access", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.references(() => users.id, { onDelete: "cascade" }),
roleId: integer("role_id")
.references(() => roles.id, { onDelete: "cascade" }),
grantedBy: text("granted_by")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
permissionLevel: text("permission_level")
.notNull()
.default("view"),
expiresAt: text("expires_at"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
lastAccessedAt: text("last_accessed_at"),
accessCount: integer("access_count").notNull().default(0),
});
export const sharedCredentials = sqliteTable("shared_credentials", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostAccessId: integer("host_access_id")
.notNull()
.references(() => hostAccess.id, { onDelete: "cascade" }),
originalCredentialId: integer("original_credential_id")
.notNull()
.references(() => sshCredentials.id, { onDelete: "cascade" }),
targetUserId: text("target_user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
encryptedUsername: text("encrypted_username").notNull(),
encryptedAuthType: text("encrypted_auth_type").notNull(),
encryptedPassword: text("encrypted_password"),
encryptedKey: text("encrypted_key", { length: 16384 }),
encryptedKeyPassword: text("encrypted_key_password"),
encryptedKeyType: text("encrypted_key_type"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
needsReEncryption: integer("needs_re_encryption", { mode: "boolean" })
.notNull()
.default(false),
});
export const roles = sqliteTable("roles", {
id: integer("id").primaryKey({ autoIncrement: true }),
name: text("name").notNull().unique(),
displayName: text("display_name").notNull(),
description: text("description"),
isSystem: integer("is_system", { mode: "boolean" })
.notNull()
.default(false),
permissions: text("permissions"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const userRoles = sqliteTable("user_roles", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
roleId: integer("role_id")
.notNull()
.references(() => roles.id, { onDelete: "cascade" }),
grantedBy: text("granted_by").references(() => users.id, {
onDelete: "set null",
}),
grantedAt: text("granted_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const auditLogs = sqliteTable("audit_logs", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
username: text("username").notNull(),
action: text("action").notNull(),
resourceType: text("resource_type").notNull(),
resourceId: text("resource_id"),
resourceName: text("resource_name"),
details: text("details"),
ipAddress: text("ip_address"),
userAgent: text("user_agent"),
success: integer("success", { mode: "boolean" }).notNull(),
errorMessage: text("error_message"),
timestamp: text("timestamp")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const sessionRecordings = sqliteTable("session_recordings", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
accessId: integer("access_id").references(() => hostAccess.id, {
onDelete: "set null",
}),
startedAt: text("started_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
endedAt: text("ended_at"),
duration: integer("duration"),
commands: text("commands"),
dangerousActions: text("dangerous_actions"),
recordingPath: text("recording_path"),
terminatedByOwner: integer("terminated_by_owner", { mode: "boolean" })
.default(false),
terminationReason: text("termination_reason"),
});

View File

@@ -1,7 +1,15 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import type {
AuthenticatedRequest,
CredentialBackend,
} from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { sshCredentials, sshCredentialUsage, sshData } from "../db/schema.js";
import {
sshCredentials,
sshCredentialUsage,
sshData,
hostAccess,
} from "../db/schema.js";
import { eq, and, desc, sql } from "drizzle-orm";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
@@ -470,6 +478,14 @@ router.put(
userId,
);
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
await sharedCredManager.updateSharedCredentialsForOriginal(
parseInt(id),
userId,
);
const credential = updated[0];
authLogger.success(
`SSH credential updated: ${credential.name} (${credential.authType}) by user ${userId}`,
@@ -524,8 +540,6 @@ router.delete(
return res.status(404).json({ error: "Credential not found" });
}
// Update hosts using this credential to set credentialId to null
// This prevents orphaned references before deletion
const hostsUsingCredential = await db
.select()
.from(sshData)
@@ -552,10 +566,32 @@ router.delete(
eq(sshData.userId, userId),
),
);
for (const host of hostsUsingCredential) {
const revokedShares = await db
.delete(hostAccess)
.where(eq(hostAccess.hostId, host.id))
.returning({ id: hostAccess.id });
if (revokedShares.length > 0) {
authLogger.info(
"Auto-revoked host shares due to credential deletion",
{
operation: "auto_revoke_shares",
hostId: host.id,
credentialId: parseInt(id),
revokedCount: revokedShares.length,
reason: "credential_deleted",
},
);
}
}
}
// sshCredentialUsage will be automatically deleted by ON DELETE CASCADE
// No need for manual deletion
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
await sharedCredManager.deleteSharedCredentialsForOriginal(parseInt(id));
await db
.delete(sshCredentials)
@@ -1124,10 +1160,9 @@ router.post(
async function deploySSHKeyToHost(
hostConfig: Record<string, unknown>,
publicKey: string,
// eslint-disable-next-line @typescript-eslint/no-unused-vars
_credentialData: Record<string, unknown>,
credData: CredentialBackend,
): Promise<{ success: boolean; message?: string; error?: string }> {
const publicKey = credData.public_key as string;
return new Promise((resolve) => {
const conn = new Client();
@@ -1248,7 +1283,7 @@ async function deploySSHKeyToHost(
.replace(/'/g, "'\\''");
conn.exec(
`printf '%s\\n' '${escapedKey}' >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys`,
`printf '%s\\n' '${escapedKey} ${credData.name}@Termix' >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys`,
(err, stream) => {
if (err) {
clearTimeout(addTimeout);
@@ -1510,7 +1545,7 @@ router.post(
});
}
const credData = credential[0];
const credData = credential[0] as unknown as CredentialBackend;
if (credData.authType !== "key") {
return res.status(400).json({
@@ -1519,7 +1554,7 @@ router.post(
});
}
const publicKey = credData.public_key || credData.publicKey;
const publicKey = credData.public_key;
if (!publicKey) {
return res.status(400).json({
success: false,
@@ -1599,11 +1634,7 @@ router.post(
}
}
const deployResult = await deploySSHKeyToHost(
hostConfig,
publicKey as string,
credData,
);
const deployResult = await deploySSHKeyToHost(hostConfig, credData);
if (deployResult.success) {
res.json({

View File

@@ -0,0 +1,850 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import {
hostAccess,
sshData,
users,
roles,
userRoles,
auditLogs,
sharedCredentials,
} from "../db/schema.js";
import { eq, and, desc, sql, or, isNull, gte } from "drizzle-orm";
import type { Request, Response } from "express";
import { databaseLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
import { PermissionManager } from "../../utils/permission-manager.js";
const router = express.Router();
const authManager = AuthManager.getInstance();
const permissionManager = PermissionManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
function isNonEmptyString(value: unknown): value is string {
return typeof value === "string" && value.trim().length > 0;
}
// Share a host with a user or role
// POST /rbac/host/:id/share
router.post(
"/host/:id/share",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const userId = req.userId!;
if (isNaN(hostId)) {
return res.status(400).json({ error: "Invalid host ID" });
}
try {
const {
targetType = "user",
targetUserId,
targetRoleId,
durationHours,
permissionLevel = "view",
} = req.body;
if (!["user", "role"].includes(targetType)) {
return res
.status(400)
.json({ error: "Invalid target type. Must be 'user' or 'role'" });
}
if (targetType === "user" && !isNonEmptyString(targetUserId)) {
return res
.status(400)
.json({ error: "Target user ID is required when sharing with user" });
}
if (targetType === "role" && !targetRoleId) {
return res
.status(400)
.json({ error: "Target role ID is required when sharing with role" });
}
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
databaseLogger.warn("Attempt to share host not owned by user", {
operation: "share_host",
userId,
hostId,
});
return res.status(403).json({ error: "Not host owner" });
}
if (!host[0].credentialId) {
return res.status(400).json({
error:
"Only hosts using credentials can be shared. Please create a credential and assign it to this host before sharing.",
code: "CREDENTIAL_REQUIRED_FOR_SHARING",
});
}
if (targetType === "user") {
const targetUser = await db
.select({ id: users.id, username: users.username })
.from(users)
.where(eq(users.id, targetUserId))
.limit(1);
if (targetUser.length === 0) {
return res.status(404).json({ error: "Target user not found" });
}
} else {
const targetRole = await db
.select({ id: roles.id, name: roles.name })
.from(roles)
.where(eq(roles.id, targetRoleId))
.limit(1);
if (targetRole.length === 0) {
return res.status(404).json({ error: "Target role not found" });
}
}
let expiresAt: string | null = null;
if (
durationHours &&
typeof durationHours === "number" &&
durationHours > 0
) {
const expiryDate = new Date();
expiryDate.setHours(expiryDate.getHours() + durationHours);
expiresAt = expiryDate.toISOString();
}
const validLevels = ["view"];
if (!validLevels.includes(permissionLevel)) {
return res.status(400).json({
error: "Invalid permission level. Only 'view' is supported.",
validLevels,
});
}
const whereConditions = [eq(hostAccess.hostId, hostId)];
if (targetType === "user") {
whereConditions.push(eq(hostAccess.userId, targetUserId));
} else {
whereConditions.push(eq(hostAccess.roleId, targetRoleId));
}
const existing = await db
.select()
.from(hostAccess)
.where(and(...whereConditions))
.limit(1);
if (existing.length > 0) {
await db
.update(hostAccess)
.set({
permissionLevel,
expiresAt,
})
.where(eq(hostAccess.id, existing[0].id));
await db
.delete(sharedCredentials)
.where(eq(sharedCredentials.hostAccessId, existing[0].id));
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
if (targetType === "user") {
await sharedCredManager.createSharedCredentialForUser(
existing[0].id,
host[0].credentialId,
targetUserId!,
userId,
);
} else {
await sharedCredManager.createSharedCredentialsForRole(
existing[0].id,
host[0].credentialId,
targetRoleId!,
userId,
);
}
return res.json({
success: true,
message: "Host access updated",
expiresAt,
});
}
const result = await db.insert(hostAccess).values({
hostId,
userId: targetType === "user" ? targetUserId : null,
roleId: targetType === "role" ? targetRoleId : null,
grantedBy: userId,
permissionLevel,
expiresAt,
});
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
if (targetType === "user") {
await sharedCredManager.createSharedCredentialForUser(
result.lastInsertRowid as number,
host[0].credentialId,
targetUserId!,
userId,
);
} else {
await sharedCredManager.createSharedCredentialsForRole(
result.lastInsertRowid as number,
host[0].credentialId,
targetRoleId!,
userId,
);
}
res.json({
success: true,
message: `Host shared successfully with ${targetType}`,
expiresAt,
});
} catch (error) {
databaseLogger.error("Failed to share host", error, {
operation: "share_host",
hostId,
userId,
});
res.status(500).json({ error: "Failed to share host" });
}
},
);
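The share route above turns an optional `durationHours` into an absolute `expiresAt` ISO timestamp, leaving it `null` (no expiry) for missing or non-positive values. That logic, extracted as a standalone helper for illustration — `computeExpiresAt` is a hypothetical name:

```typescript
// Hypothetical helper mirroring the expiry logic in the share route:
// a positive numeric durationHours yields an absolute ISO timestamp,
// anything else yields null (no expiry).
function computeExpiresAt(durationHours: unknown, now: Date = new Date()): string | null {
  if (typeof durationHours !== "number" || durationHours <= 0) {
    return null;
  }
  const expiry = new Date(now); // copy so the caller's Date is not mutated
  expiry.setHours(expiry.getHours() + durationHours);
  return expiry.toISOString();
}
```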
// Revoke host access
// DELETE /rbac/host/:id/access/:accessId
router.delete(
"/host/:id/access/:accessId",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const accessId = parseInt(req.params.accessId, 10);
const userId = req.userId!;
if (isNaN(hostId) || isNaN(accessId)) {
return res.status(400).json({ error: "Invalid ID" });
}
try {
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
return res.status(403).json({ error: "Not host owner" });
}
await db.delete(hostAccess).where(eq(hostAccess.id, accessId));
res.json({ success: true, message: "Access revoked" });
} catch (error) {
databaseLogger.error("Failed to revoke host access", error, {
operation: "revoke_host_access",
hostId,
accessId,
userId,
});
res.status(500).json({ error: "Failed to revoke access" });
}
},
);
// Get host access list
// GET /rbac/host/:id/access
router.get(
"/host/:id/access",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const userId = req.userId!;
if (isNaN(hostId)) {
return res.status(400).json({ error: "Invalid host ID" });
}
try {
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
return res.status(403).json({ error: "Not host owner" });
}
const rawAccessList = await db
.select({
id: hostAccess.id,
userId: hostAccess.userId,
roleId: hostAccess.roleId,
username: users.username,
roleName: roles.name,
roleDisplayName: roles.displayName,
grantedBy: hostAccess.grantedBy,
grantedByUsername: sql<string>`(SELECT username FROM users WHERE id = ${hostAccess.grantedBy})`,
permissionLevel: hostAccess.permissionLevel,
expiresAt: hostAccess.expiresAt,
createdAt: hostAccess.createdAt,
})
.from(hostAccess)
.leftJoin(users, eq(hostAccess.userId, users.id))
.leftJoin(roles, eq(hostAccess.roleId, roles.id))
.where(eq(hostAccess.hostId, hostId))
.orderBy(desc(hostAccess.createdAt));
const accessList = rawAccessList.map((access) => ({
id: access.id,
targetType: access.userId ? "user" : "role",
userId: access.userId,
roleId: access.roleId,
username: access.username,
roleName: access.roleName,
roleDisplayName: access.roleDisplayName,
grantedBy: access.grantedBy,
grantedByUsername: access.grantedByUsername,
permissionLevel: access.permissionLevel,
expiresAt: access.expiresAt,
createdAt: access.createdAt,
}));
res.json({ accessList });
} catch (error) {
databaseLogger.error("Failed to get host access list", error, {
operation: "get_host_access_list",
hostId,
userId,
});
res.status(500).json({ error: "Failed to get access list" });
}
},
);
// Get user's shared hosts (hosts shared WITH this user)
// GET /rbac/shared-hosts
router.get(
"/shared-hosts",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const userId = req.userId!;
try {
const now = new Date().toISOString();
const sharedHosts = await db
.select({
id: sshData.id,
name: sshData.name,
ip: sshData.ip,
port: sshData.port,
username: sshData.username,
folder: sshData.folder,
tags: sshData.tags,
permissionLevel: hostAccess.permissionLevel,
expiresAt: hostAccess.expiresAt,
grantedBy: hostAccess.grantedBy,
ownerUsername: users.username,
})
.from(hostAccess)
.innerJoin(sshData, eq(hostAccess.hostId, sshData.id))
.innerJoin(users, eq(sshData.userId, users.id))
.where(
and(
eq(hostAccess.userId, userId),
or(isNull(hostAccess.expiresAt), gte(hostAccess.expiresAt, now)),
),
)
.orderBy(desc(hostAccess.createdAt));
res.json({ sharedHosts });
} catch (error) {
databaseLogger.error("Failed to get shared hosts", error, {
operation: "get_shared_hosts",
userId,
});
res.status(500).json({ error: "Failed to get shared hosts" });
}
},
);
// Get all roles
// GET /rbac/roles
router.get(
"/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
try {
const allRoles = await db
.select()
.from(roles)
.orderBy(roles.isSystem, roles.name);
const rolesWithParsedPermissions = allRoles.map((role) => ({
...role,
permissions: JSON.parse(role.permissions),
}));
res.json({ roles: rolesWithParsedPermissions });
} catch (error) {
databaseLogger.error("Failed to get roles", error, {
operation: "get_roles",
});
res.status(500).json({ error: "Failed to get roles" });
}
},
);
// List roles (names only, without permissions payload)
// GET /rbac/roles
router.get(
"/roles",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
try {
const rolesList = await db
.select({
id: roles.id,
name: roles.name,
displayName: roles.displayName,
description: roles.description,
isSystem: roles.isSystem,
createdAt: roles.createdAt,
updatedAt: roles.updatedAt,
})
.from(roles)
.orderBy(roles.isSystem, roles.name);
res.json({ roles: rolesList });
} catch (error) {
databaseLogger.error("Failed to get roles", error, {
operation: "get_roles",
});
res.status(500).json({ error: "Failed to get roles" });
}
},
);
// Create new role
// POST /rbac/roles
router.post(
"/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const { name, displayName, description } = req.body;
if (!isNonEmptyString(name) || !isNonEmptyString(displayName)) {
return res.status(400).json({
error: "Role name and display name are required",
});
}
if (!/^[a-z0-9_-]+$/.test(name)) {
return res.status(400).json({
error:
"Role name must contain only lowercase letters, numbers, underscores, and hyphens",
});
}
try {
const existing = await db
.select({ id: roles.id })
.from(roles)
.where(eq(roles.name, name))
.limit(1);
if (existing.length > 0) {
return res.status(409).json({
error: "A role with this name already exists",
});
}
const result = await db.insert(roles).values({
name,
displayName,
description: description || null,
isSystem: false,
permissions: null,
});
const newRoleId = result.lastInsertRowid;
res.status(201).json({
success: true,
roleId: newRoleId,
message: "Role created successfully",
});
} catch (error) {
databaseLogger.error("Failed to create role", error, {
operation: "create_role",
roleName: name,
});
res.status(500).json({ error: "Failed to create role" });
}
},
);
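The create-role route above validates role names against `/^[a-z0-9_-]+$/` before the uniqueness check. That guard can be sketched as a standalone predicate — `isValidRoleName` is a hypothetical name using the same regex:

```typescript
// Hypothetical guard matching the create-role validation above:
// lowercase letters, digits, underscores, and hyphens only (non-empty).
const ROLE_NAME_PATTERN = /^[a-z0-9_-]+$/;

function isValidRoleName(name: unknown): name is string {
  return typeof name === "string" && ROLE_NAME_PATTERN.test(name);
}
```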
// Update role
// PUT /rbac/roles/:id
router.put(
"/roles/:id",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const roleId = parseInt(req.params.id, 10);
const { displayName, description } = req.body;
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
if (!displayName && description === undefined) {
return res.status(400).json({
error: "At least one field (displayName or description) is required",
});
}
try {
const existingRole = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (existingRole.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
const updates: {
displayName?: string;
description?: string | null;
updatedAt: string;
} = {
updatedAt: new Date().toISOString(),
};
if (displayName) {
updates.displayName = displayName;
}
if (description !== undefined) {
updates.description = description || null;
}
await db.update(roles).set(updates).where(eq(roles.id, roleId));
res.json({
success: true,
message: "Role updated successfully",
});
} catch (error) {
databaseLogger.error("Failed to update role", error, {
operation: "update_role",
roleId,
});
res.status(500).json({ error: "Failed to update role" });
}
},
);
// Delete role
// DELETE /rbac/roles/:id
router.delete(
"/roles/:id",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const roleId = parseInt(req.params.id, 10);
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
try {
const role = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error: "Cannot delete system roles",
});
}
const deletedUserRoles = await db
.delete(userRoles)
.where(eq(userRoles.roleId, roleId))
.returning({ userId: userRoles.userId });
for (const { userId } of deletedUserRoles) {
permissionManager.invalidateUserPermissionCache(userId);
}
const deletedHostAccess = await db
.delete(hostAccess)
.where(eq(hostAccess.roleId, roleId))
.returning({ id: hostAccess.id });
await db.delete(roles).where(eq(roles.id, roleId));
res.json({
success: true,
message: "Role deleted successfully",
});
} catch (error) {
databaseLogger.error("Failed to delete role", error, {
operation: "delete_role",
roleId,
});
res.status(500).json({ error: "Failed to delete role" });
}
},
);
// Assign role to user
// POST /rbac/users/:userId/roles
router.post(
"/users/:userId/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const currentUserId = req.userId!;
try {
const { roleId } = req.body;
if (typeof roleId !== "number") {
return res.status(400).json({ error: "Role ID is required" });
}
const targetUser = await db
.select()
.from(users)
.where(eq(users.id, targetUserId))
.limit(1);
if (targetUser.length === 0) {
return res.status(404).json({ error: "User not found" });
}
const role = await db
.select()
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error:
"System roles (admin, user) are automatically assigned and cannot be manually assigned",
});
}
const existing = await db
.select()
.from(userRoles)
.where(
and(eq(userRoles.userId, targetUserId), eq(userRoles.roleId, roleId)),
)
.limit(1);
if (existing.length > 0) {
return res.status(409).json({ error: "Role already assigned" });
}
await db.insert(userRoles).values({
userId: targetUserId,
roleId,
grantedBy: currentUserId,
});
const hostsSharedWithRole = await db
.select()
.from(hostAccess)
.innerJoin(sshData, eq(hostAccess.hostId, sshData.id))
.where(eq(hostAccess.roleId, roleId));
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
for (const { host_access, ssh_data } of hostsSharedWithRole) {
if (ssh_data.credentialId) {
try {
await sharedCredManager.createSharedCredentialForUser(
host_access.id,
ssh_data.credentialId,
targetUserId,
ssh_data.userId,
);
} catch (error) {
databaseLogger.error(
"Failed to create shared credential for new role member",
error,
{
operation: "assign_role_create_credentials",
targetUserId,
roleId,
hostId: ssh_data.id,
},
);
}
}
}
permissionManager.invalidateUserPermissionCache(targetUserId);
res.json({
success: true,
message: "Role assigned successfully",
});
} catch (error) {
databaseLogger.error("Failed to assign role", error, {
operation: "assign_role",
targetUserId,
});
res.status(500).json({ error: "Failed to assign role" });
}
},
);
// Remove role from user
// DELETE /rbac/users/:userId/roles/:roleId
router.delete(
"/users/:userId/roles/:roleId",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const roleId = parseInt(req.params.roleId, 10);
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
try {
const role = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error:
"System roles (admin, user) are automatically assigned and cannot be removed",
});
}
await db
.delete(userRoles)
.where(
and(eq(userRoles.userId, targetUserId), eq(userRoles.roleId, roleId)),
);
permissionManager.invalidateUserPermissionCache(targetUserId);
res.json({
success: true,
message: "Role removed successfully",
});
} catch (error) {
databaseLogger.error("Failed to remove role", error, {
operation: "remove_role",
targetUserId,
roleId,
});
res.status(500).json({ error: "Failed to remove role" });
}
},
);
// Get user's roles
// GET /rbac/users/:userId/roles
router.get(
"/users/:userId/roles",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const currentUserId = req.userId!;
if (
targetUserId !== currentUserId &&
!(await permissionManager.isAdmin(currentUserId))
) {
return res.status(403).json({ error: "Access denied" });
}
try {
const userRolesList = await db
.select({
id: userRoles.id,
roleId: roles.id,
roleName: roles.name,
roleDisplayName: roles.displayName,
description: roles.description,
isSystem: roles.isSystem,
grantedAt: userRoles.grantedAt,
})
.from(userRoles)
.innerJoin(roles, eq(userRoles.roleId, roles.id))
.where(eq(userRoles.userId, targetUserId));
res.json({ roles: userRolesList });
} catch (error) {
databaseLogger.error("Failed to get user roles", error, {
operation: "get_user_roles",
targetUserId,
});
res.status(500).json({ error: "Failed to get user roles" });
}
},
);
export default router;

View File

@@ -11,13 +11,27 @@ import {
sshFolders,
commandHistory,
recentActivity,
hostAccess,
userRoles,
sessionRecordings,
} from "../db/schema.js";
import { eq, and, desc, isNotNull, or } from "drizzle-orm";
import {
eq,
and,
desc,
isNotNull,
or,
isNull,
gte,
sql,
inArray,
} from "drizzle-orm";
import type { Request, Response } from "express";
import multer from "multer";
import { sshLogger } from "../../utils/logger.js";
import { SimpleDBOps } from "../../utils/simple-db-ops.js";
import { AuthManager } from "../../utils/auth-manager.js";
import { PermissionManager } from "../../utils/permission-manager.js";
import { DataCrypto } from "../../utils/data-crypto.js";
import { SystemCrypto } from "../../utils/system-crypto.js";
import { DatabaseSaveTrigger } from "../db/index.js";
@@ -35,6 +49,7 @@ function isValidPort(port: unknown): port is number {
}
const authManager = AuthManager.getInstance();
const permissionManager = PermissionManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const requireDataAccess = authManager.createDataAccessMiddleware();
@@ -231,10 +246,12 @@ router.post(
key,
keyPassword,
keyType,
sudoPassword,
pin,
enableTerminal,
enableTunnel,
enableFileManager,
enableDocker,
defaultPath,
tunnelConnections,
jumpHosts,
@@ -242,7 +259,16 @@ router.post(
statsConfig,
terminalConfig,
forceKeyboardInteractive,
notes,
useSocks5,
socks5Host,
socks5Port,
socks5Username,
socks5Password,
socks5ProxyChain,
overrideCredentialUsername,
} = hostData;
if (
!isNonEmptyString(userId) ||
!isNonEmptyString(ip) ||
@@ -269,6 +295,7 @@ router.post(
username,
authType: effectiveAuthType,
credentialId: credentialId || null,
overrideCredentialUsername: overrideCredentialUsername ? 1 : 0,
pin: pin ? 1 : 0,
enableTerminal: enableTerminal ? 1 : 0,
enableTunnel: enableTunnel ? 1 : 0,
@@ -280,10 +307,21 @@ router.post(
? JSON.stringify(quickActions)
: null,
enableFileManager: enableFileManager ? 1 : 0,
enableDocker: enableDocker ? 1 : 0,
defaultPath: defaultPath || null,
statsConfig: statsConfig ? JSON.stringify(statsConfig) : null,
terminalConfig: terminalConfig ? JSON.stringify(terminalConfig) : null,
forceKeyboardInteractive: forceKeyboardInteractive ? "true" : "false",
notes: notes || null,
sudoPassword: sudoPassword || null,
useSocks5: useSocks5 ? 1 : 0,
socks5Host: socks5Host || null,
socks5Port: socks5Port || null,
socks5Username: socks5Username || null,
socks5Password: socks5Password || null,
socks5ProxyChain: socks5ProxyChain
? JSON.stringify(socks5ProxyChain)
: null,
};
if (effectiveAuthType === "password") {
@@ -341,12 +379,14 @@ router.post(
? JSON.parse(createdHost.jumpHosts as string)
: [],
enableFileManager: !!createdHost.enableFileManager,
enableDocker: !!createdHost.enableDocker,
statsConfig: createdHost.statsConfig
? JSON.parse(createdHost.statsConfig as string)
: undefined,
};
const resolvedHost = (await resolveHostCredentials(baseHost)) || baseHost;
const resolvedHost =
(await resolveHostCredentials(baseHost, userId)) || baseHost;
sshLogger.success(
`SSH host created: ${name} (${ip}:${port}) by user ${userId}`,
@@ -453,10 +493,12 @@ router.put(
key,
keyPassword,
keyType,
sudoPassword,
pin,
enableTerminal,
enableTunnel,
enableFileManager,
enableDocker,
defaultPath,
tunnelConnections,
jumpHosts,
@@ -464,7 +506,16 @@ router.put(
statsConfig,
terminalConfig,
forceKeyboardInteractive,
notes,
useSocks5,
socks5Host,
socks5Port,
socks5Username,
socks5Password,
socks5ProxyChain,
overrideCredentialUsername,
} = hostData;
if (
!isNonEmptyString(userId) ||
!isNonEmptyString(ip) ||
@@ -492,6 +543,7 @@ router.put(
username,
authType: effectiveAuthType,
credentialId: credentialId || null,
overrideCredentialUsername: overrideCredentialUsername ? 1 : 0,
pin: pin ? 1 : 0,
enableTerminal: enableTerminal ? 1 : 0,
enableTunnel: enableTunnel ? 1 : 0,
@@ -503,10 +555,21 @@ router.put(
? JSON.stringify(quickActions)
: null,
enableFileManager: enableFileManager ? 1 : 0,
enableDocker: enableDocker ? 1 : 0,
defaultPath: defaultPath || null,
statsConfig: statsConfig ? JSON.stringify(statsConfig) : null,
terminalConfig: terminalConfig ? JSON.stringify(terminalConfig) : null,
forceKeyboardInteractive: forceKeyboardInteractive ? "true" : "false",
notes: notes || null,
sudoPassword: sudoPassword || null,
useSocks5: useSocks5 ? 1 : 0,
socks5Host: socks5Host || null,
socks5Port: socks5Port || null,
socks5Username: socks5Username || null,
socks5Password: socks5Password || null,
socks5ProxyChain: socks5ProxyChain
? JSON.stringify(socks5ProxyChain)
: null,
};
if (effectiveAuthType === "password") {
@@ -535,23 +598,100 @@ router.put(
}
try {
const accessInfo = await permissionManager.canAccessHost(
userId,
Number(hostId),
"write",
);
if (!accessInfo.hasAccess) {
sshLogger.warn("User does not have permission to update host", {
operation: "host_update",
hostId: parseInt(hostId),
userId,
});
return res.status(403).json({ error: "Access denied" });
}
if (!accessInfo.isOwner) {
sshLogger.warn("Shared user attempted to update host (view-only)", {
operation: "host_update",
hostId: parseInt(hostId),
userId,
});
return res.status(403).json({
error: "Only the host owner can modify host configuration",
});
}
const hostRecord = await db
.select({
userId: sshData.userId,
credentialId: sshData.credentialId,
authType: sshData.authType,
})
.from(sshData)
.where(eq(sshData.id, Number(hostId)))
.limit(1);
if (hostRecord.length === 0) {
sshLogger.warn("Host not found for update", {
operation: "host_update",
hostId: parseInt(hostId),
userId,
});
return res.status(404).json({ error: "Host not found" });
}
const ownerId = hostRecord[0].userId;
if (
!accessInfo.isOwner &&
sshDataObj.credentialId !== undefined &&
sshDataObj.credentialId !== hostRecord[0].credentialId
) {
return res.status(403).json({
error: "Only the host owner can change the credential",
});
}
if (
!accessInfo.isOwner &&
sshDataObj.authType !== undefined &&
sshDataObj.authType !== hostRecord[0].authType
) {
return res.status(403).json({
error: "Only the host owner can change the authentication type",
});
}
if (sshDataObj.credentialId !== undefined) {
if (
hostRecord[0].credentialId !== null &&
sshDataObj.credentialId === null
) {
const revokedShares = await db
.delete(hostAccess)
.where(eq(hostAccess.hostId, Number(hostId)))
.returning({ id: hostAccess.id, userId: hostAccess.userId });
}
}
await SimpleDBOps.update(
sshData,
"ssh_data",
and(eq(sshData.id, Number(hostId)), eq(sshData.userId, userId)),
eq(sshData.id, Number(hostId)),
sshDataObj,
userId,
ownerId,
);
const updatedHosts = await SimpleDBOps.select(
db
.select()
.from(sshData)
.where(
and(eq(sshData.id, Number(hostId)), eq(sshData.userId, userId)),
),
.where(eq(sshData.id, Number(hostId))),
"ssh_data",
userId,
ownerId,
);
if (updatedHosts.length === 0) {
@@ -582,12 +722,17 @@ router.put(
? JSON.parse(updatedHost.jumpHosts as string)
: [],
enableFileManager: !!updatedHost.enableFileManager,
enableDocker: !!updatedHost.enableDocker,
statsConfig: updatedHost.statsConfig
? JSON.parse(updatedHost.statsConfig as string)
: undefined,
dockerConfig: updatedHost.dockerConfig
? JSON.parse(updatedHost.dockerConfig as string)
: undefined,
};
const resolvedHost = (await resolveHostCredentials(baseHost)) || baseHost;
const resolvedHost =
(await resolveHostCredentials(baseHost, userId)) || baseHost;
sshLogger.success(
`SSH host updated: ${name} (${ip}:${port}) by user ${userId}`,
@@ -656,11 +801,115 @@ router.get(
return res.status(400).json({ error: "Invalid userId" });
}
try {
const data = await SimpleDBOps.select(
db.select().from(sshData).where(eq(sshData.userId, userId)),
"ssh_data",
userId,
);
const now = new Date().toISOString();
const userRoleIds = await db
.select({ roleId: userRoles.roleId })
.from(userRoles)
.where(eq(userRoles.userId, userId));
const roleIds = userRoleIds.map((r) => r.roleId);
const rawData = await db
.select({
id: sshData.id,
userId: sshData.userId,
name: sshData.name,
ip: sshData.ip,
port: sshData.port,
username: sshData.username,
folder: sshData.folder,
tags: sshData.tags,
pin: sshData.pin,
authType: sshData.authType,
password: sshData.password,
key: sshData.key,
keyPassword: sshData.keyPassword,
keyType: sshData.keyType,
enableTerminal: sshData.enableTerminal,
enableTunnel: sshData.enableTunnel,
tunnelConnections: sshData.tunnelConnections,
jumpHosts: sshData.jumpHosts,
enableFileManager: sshData.enableFileManager,
defaultPath: sshData.defaultPath,
autostartPassword: sshData.autostartPassword,
autostartKey: sshData.autostartKey,
autostartKeyPassword: sshData.autostartKeyPassword,
forceKeyboardInteractive: sshData.forceKeyboardInteractive,
statsConfig: sshData.statsConfig,
terminalConfig: sshData.terminalConfig,
createdAt: sshData.createdAt,
updatedAt: sshData.updatedAt,
credentialId: sshData.credentialId,
overrideCredentialUsername: sshData.overrideCredentialUsername,
quickActions: sshData.quickActions,
notes: sshData.notes,
enableDocker: sshData.enableDocker,
useSocks5: sshData.useSocks5,
socks5Host: sshData.socks5Host,
socks5Port: sshData.socks5Port,
socks5Username: sshData.socks5Username,
socks5Password: sshData.socks5Password,
socks5ProxyChain: sshData.socks5ProxyChain,
ownerId: sshData.userId,
isShared: sql<boolean>`${hostAccess.id} IS NOT NULL`,
permissionLevel: hostAccess.permissionLevel,
expiresAt: hostAccess.expiresAt,
})
.from(sshData)
.leftJoin(
hostAccess,
and(
eq(hostAccess.hostId, sshData.id),
or(
eq(hostAccess.userId, userId),
roleIds.length > 0
? inArray(hostAccess.roleId, roleIds)
: sql`false`,
),
or(isNull(hostAccess.expiresAt), gte(hostAccess.expiresAt, now)),
),
)
.where(
or(
eq(sshData.userId, userId),
and(
eq(hostAccess.userId, userId),
or(isNull(hostAccess.expiresAt), gte(hostAccess.expiresAt, now)),
),
roleIds.length > 0
? and(
inArray(hostAccess.roleId, roleIds),
or(
isNull(hostAccess.expiresAt),
gte(hostAccess.expiresAt, now),
),
)
: sql`false`,
),
);
const ownHosts = rawData.filter((row) => row.userId === userId);
const sharedHosts = rawData.filter((row) => row.userId !== userId);
let decryptedOwnHosts: any[] = [];
try {
decryptedOwnHosts = await SimpleDBOps.select(
Promise.resolve(ownHosts),
"ssh_data",
userId,
);
} catch (decryptError) {
sshLogger.error("Failed to decrypt own hosts", decryptError, {
operation: "host_fetch_own_decrypt_failed",
userId,
});
decryptedOwnHosts = [];
}
const sanitizedSharedHosts = sharedHosts;
const data = [...decryptedOwnHosts, ...sanitizedSharedHosts];
const result = await Promise.all(
data.map(async (row: Record<string, unknown>) => {
@@ -683,6 +932,7 @@ router.get(
? JSON.parse(row.quickActions as string)
: [],
enableFileManager: !!row.enableFileManager,
enableDocker: !!row.enableDocker,
statsConfig: row.statsConfig
? JSON.parse(row.statsConfig as string)
: undefined,
@@ -690,9 +940,18 @@ router.get(
? JSON.parse(row.terminalConfig as string)
: undefined,
forceKeyboardInteractive: row.forceKeyboardInteractive === "true",
socks5ProxyChain: row.socks5ProxyChain
? JSON.parse(row.socks5ProxyChain as string)
: [],
isShared: !!row.isShared,
permissionLevel: row.permissionLevel || undefined,
sharedExpiresAt: row.expiresAt || undefined,
};
const resolved =
(await resolveHostCredentials(baseHost, userId)) || baseHost;
return resolved;
}),
);
@@ -765,9 +1024,12 @@ router.get(
? JSON.parse(host.terminalConfig)
: undefined,
forceKeyboardInteractive: host.forceKeyboardInteractive === "true",
socks5ProxyChain: host.socks5ProxyChain
? JSON.parse(host.socks5ProxyChain)
: [],
};
res.json((await resolveHostCredentials(result, userId)) || result);
} catch (err) {
sshLogger.error("Failed to fetch SSH host by ID from database", err, {
operation: "host_fetch_by_id",
@@ -811,7 +1073,7 @@ router.get(
const host = hosts[0];
const resolvedHost = (await resolveHostCredentials(host, userId)) || host;
const exportData = {
name: resolvedHost.name,
@@ -836,6 +1098,9 @@ router.get(
tunnelConnections: resolvedHost.tunnelConnections
? JSON.parse(resolvedHost.tunnelConnections as string)
: [],
socks5ProxyChain: resolvedHost.socks5ProxyChain
? JSON.parse(resolvedHost.socks5ProxyChain as string)
: [],
};
sshLogger.success("Host exported with decrypted credentials", {
@@ -893,57 +1158,33 @@ router.delete(
await db
.delete(fileManagerRecent)
.where(eq(fileManagerRecent.hostId, numericHostId));
await db
.delete(fileManagerPinned)
.where(eq(fileManagerPinned.hostId, numericHostId));
await db
.delete(fileManagerShortcuts)
.where(eq(fileManagerShortcuts.hostId, numericHostId));
await db
.delete(commandHistory)
.where(eq(commandHistory.hostId, numericHostId));
await db
.delete(sshCredentialUsage)
.where(eq(sshCredentialUsage.hostId, numericHostId));
await db
.delete(recentActivity)
.where(eq(recentActivity.hostId, numericHostId));
await db.delete(hostAccess).where(eq(hostAccess.hostId, numericHostId));
await db
.delete(sessionRecordings)
.where(eq(sessionRecordings.hostId, numericHostId));
await db
.delete(sshData)
@@ -1450,11 +1691,54 @@ router.delete(
async function resolveHostCredentials(
host: Record<string, unknown>,
requestingUserId?: string,
): Promise<Record<string, unknown>> {
try {
if (host.credentialId && (host.userId || host.ownerId)) {
const credentialId = host.credentialId as number;
const ownerId = (host.ownerId || host.userId) as string;
if (requestingUserId && requestingUserId !== ownerId) {
try {
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
const sharedCred = await sharedCredManager.getSharedCredentialForUser(
host.id as number,
requestingUserId,
);
if (sharedCred) {
const resolvedHost: Record<string, unknown> = {
...host,
authType: sharedCred.authType,
password: sharedCred.password,
key: sharedCred.key,
keyPassword: sharedCred.keyPassword,
keyType: sharedCred.keyType,
};
if (!host.overrideCredentialUsername) {
resolvedHost.username = sharedCred.username;
}
return resolvedHost;
}
} catch (sharedCredError) {
sshLogger.warn(
"Failed to get shared credential, falling back to owner credential",
{
operation: "resolve_shared_credential_fallback",
hostId: host.id as number,
requestingUserId,
error:
sharedCredError instanceof Error
? sharedCredError.message
: "Unknown error",
},
);
}
}
const credentials = await SimpleDBOps.select(
db
@@ -1463,24 +1747,29 @@ async function resolveHostCredentials(
.where(
and(
eq(sshCredentials.id, credentialId),
eq(sshCredentials.userId, ownerId),
),
),
"ssh_credentials",
ownerId,
);
if (credentials.length > 0) {
const credential = credentials[0];
const resolvedHost: Record<string, unknown> = {
...host,
username: credential.username,
authType: credential.auth_type || credential.authType,
password: credential.password,
key: credential.key,
keyPassword: credential.key_password || credential.keyPassword,
keyType: credential.key_type || credential.keyType,
};
if (!host.overrideCredentialUsername) {
resolvedHost.username = credential.username;
}
return resolvedHost;
}
}
@@ -1680,6 +1969,40 @@ router.delete(
});
}
const hostIds = hostsToDelete.map((host) => host.id);
if (hostIds.length > 0) {
await db
.delete(fileManagerRecent)
.where(inArray(fileManagerRecent.hostId, hostIds));
await db
.delete(fileManagerPinned)
.where(inArray(fileManagerPinned.hostId, hostIds));
await db
.delete(fileManagerShortcuts)
.where(inArray(fileManagerShortcuts.hostId, hostIds));
await db
.delete(commandHistory)
.where(inArray(commandHistory.hostId, hostIds));
await db
.delete(sshCredentialUsage)
.where(inArray(sshCredentialUsage.hostId, hostIds));
await db
.delete(recentActivity)
.where(inArray(recentActivity.hostId, hostIds));
await db.delete(hostAccess).where(inArray(hostAccess.hostId, hostIds));
await db
.delete(sessionRecordings)
.where(inArray(sessionRecordings.hostId, hostIds));
}
await db
.delete(sshData)
.where(and(eq(sshData.userId, userId), eq(sshData.folder, folderName)));
@@ -1782,10 +2105,12 @@ router.post(
continue;
}
if (!["password", "key", "credential"].includes(hostData.authType)) {
if (
!["password", "key", "credential", "none"].includes(hostData.authType)
) {
results.failed++;
results.errors.push(
`Host ${i + 1}: Invalid authType. Must be 'password', 'key', 'credential', or 'none'`,
);
continue;
}
@@ -1840,13 +2165,38 @@ router.post(
enableTerminal: hostData.enableTerminal !== false,
enableTunnel: hostData.enableTunnel !== false,
enableFileManager: hostData.enableFileManager !== false,
enableDocker: hostData.enableDocker || false,
defaultPath: hostData.defaultPath || "/",
tunnelConnections: hostData.tunnelConnections
? JSON.stringify(hostData.tunnelConnections)
: "[]",
jumpHosts: hostData.jumpHosts
? JSON.stringify(hostData.jumpHosts)
: null,
quickActions: hostData.quickActions
? JSON.stringify(hostData.quickActions)
: null,
statsConfig: hostData.statsConfig
? JSON.stringify(hostData.statsConfig)
: null,
terminalConfig: hostData.terminalConfig
? JSON.stringify(hostData.terminalConfig)
: null,
forceKeyboardInteractive: hostData.forceKeyboardInteractive
? "true"
: "false",
notes: hostData.notes || null,
useSocks5: hostData.useSocks5 ? 1 : 0,
socks5Host: hostData.socks5Host || null,
socks5Port: hostData.socks5Port || null,
socks5Username: hostData.socks5Username || null,
socks5Password: hostData.socks5Password || null,
socks5ProxyChain: hostData.socks5ProxyChain
? JSON.stringify(hostData.socks5ProxyChain)
: null,
overrideCredentialUsername: hostData.overrideCredentialUsername
? 1
: 0,
createdAt: new Date().toISOString(),
updatedAt: new Date().toISOString(),
};
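The insert above normalizes optional fields for SQLite storage: nested arrays/objects become JSON strings, and booleans become 0/1 integers or `"true"`/`"false"` strings depending on the column. A condensed sketch of that pattern — the helper names are illustrative, not from the repo:

```typescript
// Illustrative condensation of the normalization above (helper names
// are hypothetical): nested values become JSON text for SQLite, and
// booleans become 0/1 integers or "true"/"false" strings.
function toJsonOrNull(value: unknown): string | null {
  return value ? JSON.stringify(value) : null;
}

function normalizeHostRow(hostData: Record<string, any>) {
  return {
    tunnelConnections: hostData.tunnelConnections
      ? JSON.stringify(hostData.tunnelConnections)
      : "[]",
    jumpHosts: toJsonOrNull(hostData.jumpHosts),
    quickActions: toJsonOrNull(hostData.quickActions),
    forceKeyboardInteractive: hostData.forceKeyboardInteractive
      ? "true"
      : "false",
    useSocks5: hostData.useSocks5 ? 1 : 0,
    overrideCredentialUsername: hostData.overrideCredentialUsername ? 1 : 0,
  };
}
```

Note the asymmetry kept from the diff: `tunnelConnections` defaults to `"[]"` while the other JSON columns default to `null`.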


@@ -15,6 +15,11 @@ import {
sshCredentialUsage,
recentActivity,
snippets,
snippetFolders,
sshFolders,
commandHistory,
roles,
userRoles,
} from "../db/schema.js";
import { eq, and } from "drizzle-orm";
import bcrypt from "bcryptjs";
@@ -134,6 +139,54 @@ function isNonEmptyString(val: unknown): val is string {
const authenticateJWT = authManager.createAuthMiddleware();
const requireAdmin = authManager.createAdminMiddleware();
async function deleteUserAndRelatedData(userId: string): Promise<void> {
try {
await db
.delete(sshCredentialUsage)
.where(eq(sshCredentialUsage.userId, userId));
await db
.delete(fileManagerRecent)
.where(eq(fileManagerRecent.userId, userId));
await db
.delete(fileManagerPinned)
.where(eq(fileManagerPinned.userId, userId));
await db
.delete(fileManagerShortcuts)
.where(eq(fileManagerShortcuts.userId, userId));
await db.delete(recentActivity).where(eq(recentActivity.userId, userId));
await db.delete(dismissedAlerts).where(eq(dismissedAlerts.userId, userId));
await db.delete(snippets).where(eq(snippets.userId, userId));
await db.delete(snippetFolders).where(eq(snippetFolders.userId, userId));
await db.delete(sshFolders).where(eq(sshFolders.userId, userId));
await db.delete(commandHistory).where(eq(commandHistory.userId, userId));
await db.delete(sshData).where(eq(sshData.userId, userId));
await db.delete(sshCredentials).where(eq(sshCredentials.userId, userId));
db.$client
.prepare("DELETE FROM settings WHERE key LIKE ?")
.run(`user_%_${userId}`);
await db.delete(users).where(eq(users.id, userId));
authLogger.success("User and all related data deleted successfully", {
operation: "delete_user_and_related_data_complete",
userId,
});
} catch (error) {
authLogger.error("Failed to delete user and related data", error, {
operation: "delete_user_and_related_data_failed",
userId,
});
throw error;
}
}
// Route: Create traditional user (username/password)
// POST /users/create
router.post("/create", async (req, res) => {
@@ -210,6 +263,34 @@ router.post("/create", async (req, res) => {
totp_backup_codes: null,
});
try {
const defaultRoleName = isFirstUser ? "admin" : "user";
const defaultRole = await db
.select({ id: roles.id })
.from(roles)
.where(eq(roles.name, defaultRoleName))
.limit(1);
if (defaultRole.length > 0) {
await db.insert(userRoles).values({
userId: id,
roleId: defaultRole[0].id,
grantedBy: id,
});
} else {
authLogger.warn("Default role not found during user registration", {
operation: "assign_default_role",
userId: id,
roleName: defaultRoleName,
});
}
} catch (roleError) {
authLogger.error("Failed to assign default role", roleError, {
operation: "assign_default_role",
userId: id,
});
}
try {
await authManager.registerUser(id, password);
} catch (encryptionError) {
@@ -816,6 +897,41 @@ router.get("/oidc/callback", async (req, res) => {
scopes: String(config.scopes),
});
try {
const defaultRoleName = isFirstUser ? "admin" : "user";
const defaultRole = await db
.select({ id: roles.id })
.from(roles)
.where(eq(roles.name, defaultRoleName))
.limit(1);
if (defaultRole.length > 0) {
await db.insert(userRoles).values({
userId: id,
roleId: defaultRole[0].id,
grantedBy: id,
});
} else {
authLogger.warn(
"Default role not found during OIDC user registration",
{
operation: "assign_default_role_oidc",
userId: id,
roleName: defaultRoleName,
},
);
}
} catch (roleError) {
authLogger.error(
"Failed to assign default role to OIDC user",
roleError,
{
operation: "assign_default_role_oidc",
userId: id,
},
);
}
try {
const sessionDurationMs =
deviceInfo.type === "desktop" || deviceInfo.type === "mobile"
@@ -1055,6 +1171,19 @@ router.post("/login", async (req, res) => {
return res.status(401).json({ error: "Incorrect password" });
}
try {
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
await sharedCredManager.reEncryptPendingCredentialsForUser(userRecord.id);
} catch (error) {
authLogger.warn("Failed to re-encrypt pending shared credentials", {
operation: "reencrypt_pending_credentials",
userId: userRecord.id,
error,
});
}
if (userRecord.totp_enabled) {
const tempToken = await authManager.generateJWTToken(userRecord.id, {
pendingTOTP: true,
@@ -1128,15 +1257,7 @@ router.post("/logout", authenticateJWT, async (req, res) => {
try {
const payload = await authManager.verifyJWTToken(token);
sessionId = payload?.sessionId;
} catch (error) {}
}
await authManager.logoutUser(userId, sessionId);
@@ -2252,36 +2373,8 @@ router.delete("/delete-user", authenticateJWT, async (req, res) => {
const targetUserId = targetUser[0].id;
// Use the comprehensive deletion utility
await deleteUserAndRelatedData(targetUserId);
authLogger.success(
`User ${username} deleted by admin ${adminUser[0].username}`,
@@ -2696,15 +2789,7 @@ router.post("/link-oidc-to-password", authenticateJWT, async (req, res) => {
await authManager.revokeAllUserSessions(oidcUserId);
authManager.logoutUser(oidcUserId);
await deleteUserAndRelatedData(oidcUserId);
try {
const { saveMemoryDatabaseToFile } = await import("../db/index.js");


@@ -0,0 +1,103 @@
#!/bin/bash
set -e
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
ENV_FILE="$PROJECT_ROOT/.env"
log_info() {
echo -e "${BLUE}[SSL Setup]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SSL Setup]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[SSL Setup]${NC} $1"
}
log_error() {
echo -e "${RED}[SSL Setup]${NC} $1"
}
log_header() {
echo -e "${CYAN}$1${NC}"
}
generate_keys() {
log_info "Generating security keys..."
JWT_SECRET=$(openssl rand -hex 32)
log_success "Generated JWT secret"
DATABASE_KEY=$(openssl rand -hex 32)
log_success "Generated database encryption key"
echo "JWT_SECRET=$JWT_SECRET" >> "$ENV_FILE"
echo "DATABASE_KEY=$DATABASE_KEY" >> "$ENV_FILE"
log_success "Security keys added to .env file"
}
setup_env_file() {
log_info "Setting up environment configuration..."
if [[ -f "$ENV_FILE" ]]; then
log_warn ".env file already exists, creating backup..."
cp "$ENV_FILE" "$ENV_FILE.backup.$(date +%s)"
fi
cat > "$ENV_FILE" << EOF
# Termix SSL Configuration - Auto-generated $(date)
# SSL/TLS Configuration
ENABLE_SSL=true
SSL_PORT=8443
SSL_DOMAIN=localhost
PORT=8080
# Node environment
NODE_ENV=production
# CORS configuration
ALLOWED_ORIGINS=*
EOF
generate_keys
log_success "Environment configuration created at $ENV_FILE"
}
setup_ssl_certificates() {
log_info "Setting up SSL certificates..."
if [[ -f "$SCRIPT_DIR/setup-ssl.sh" ]]; then
bash "$SCRIPT_DIR/setup-ssl.sh"
else
log_error "SSL setup script not found at $SCRIPT_DIR/setup-ssl.sh"
exit 1
fi
}
main() {
if ! command -v openssl &> /dev/null; then
log_error "OpenSSL is not installed. Please install OpenSSL first."
exit 1
fi
setup_env_file
setup_ssl_certificates
}
# Run main function
main "$@"
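`generate_keys` shells out to `openssl rand -hex 32` for `JWT_SECRET` and `DATABASE_KEY`. If the backend ever needed to mint the same secrets itself, Node's stdlib produces an equivalent value — a sketch, with an illustrative function name:

```typescript
import { randomBytes } from "node:crypto";

// Stdlib equivalent of `openssl rand -hex 32`: 32 random bytes,
// hex-encoded into a 64-character secret.
function generateHexSecret(bytes = 32): string {
  return randomBytes(bytes).toString("hex");
}
```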


@@ -0,0 +1,121 @@
#!/bin/bash
set -e
SSL_DIR="$(dirname "$0")/../ssl"
CERT_FILE="$SSL_DIR/termix.crt"
KEY_FILE="$SSL_DIR/termix.key"
DAYS_VALID=365
DOMAIN=${SSL_DOMAIN:-"localhost"}
ALT_NAMES=${SSL_ALT_NAMES:-"DNS:localhost,DNS:127.0.0.1,DNS:*.localhost,IP:127.0.0.1"}
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() {
echo -e "${BLUE}[SSL Setup]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SSL Setup]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[SSL Setup]${NC} $1"
}
log_error() {
echo -e "${RED}[SSL Setup]${NC} $1"
}
check_existing_cert() {
if [[ -f "$CERT_FILE" && -f "$KEY_FILE" ]]; then
if openssl x509 -in "$CERT_FILE" -checkend 2592000 -noout 2>/dev/null; then
log_success "Valid SSL certificate already exists"
local expiry=$(openssl x509 -in "$CERT_FILE" -noout -enddate 2>/dev/null | cut -d= -f2)
log_info "Expires: $expiry"
return 0
else
log_warn "Existing certificate is expired or expiring soon"
fi
fi
return 1
}
generate_certificate() {
log_info "Generating new SSL certificate for domain: $DOMAIN"
mkdir -p "$SSL_DIR"
local config_file="$SSL_DIR/openssl.conf"
cat > "$config_file" << EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = v3_req
[dn]
C=US
ST=State
L=City
O=Termix
OU=IT Department
CN=$DOMAIN
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = 127.0.0.1
DNS.3 = *.localhost
IP.1 = 127.0.0.1
EOF
if [[ -n "$SSL_ALT_NAMES" ]]; then
local counter=2
IFS=',' read -ra NAMES <<< "$SSL_ALT_NAMES"
for name in "${NAMES[@]}"; do
name=$(echo "$name" | xargs)
if [[ "$name" == DNS:* ]]; then
echo "DNS.$((counter++)) = ${name#DNS:}" >> "$config_file"
elif [[ "$name" == IP:* ]]; then
echo "IP.$((counter++)) = ${name#IP:}" >> "$config_file"
fi
done
fi
log_info "Generating private key..."
openssl genrsa -out "$KEY_FILE" 2048
log_info "Generating certificate..."
openssl req -new -x509 -key "$KEY_FILE" -out "$CERT_FILE" -days $DAYS_VALID -config "$config_file" -extensions v3_req
chmod 600 "$KEY_FILE"
chmod 644 "$CERT_FILE"
rm -f "$config_file"
log_success "SSL certificate generated successfully"
log_info "Valid for: $DAYS_VALID days"
}
main() {
if ! command -v openssl &> /dev/null; then
log_error "OpenSSL is not installed. Please install OpenSSL first."
exit 1
fi
if ! check_existing_cert; then
generate_certificate
fi
}
main "$@"
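The `SSL_ALT_NAMES` loop above splits a comma-separated list, trims each entry (the `xargs` call), and appends numbered `DNS.n`/`IP.n` lines to the OpenSSL config with a single shared counter. The same parsing in TypeScript — a sketch assuming only the `DNS:`/`IP:` prefixes the script recognizes:

```typescript
// Mirrors the SSL_ALT_NAMES loop: split on commas, trim each entry,
// and emit numbered DNS./IP. lines sharing one counter.
function buildAltNameLines(sslAltNames: string, startAt = 2): string[] {
  const lines: string[] = [];
  let counter = startAt;
  for (const raw of sslAltNames.split(",")) {
    const name = raw.trim();
    if (name.startsWith("DNS:")) {
      lines.push(`DNS.${counter++} = ${name.slice(4)}`);
    } else if (name.startsWith("IP:")) {
      lines.push(`IP.${counter++} = ${name.slice(3)}`);
    }
  }
  return lines;
}
```

One caveat worth checking in the script itself: the counter starts at 2 while the base config already defines `DNS.1`–`DNS.3` and `IP.1`, so extra entries can collide with the defaults (e.g. a second `DNS.2`).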


@@ -0,0 +1,632 @@
import { Client as SSHClient } from "ssh2";
import { WebSocketServer, WebSocket } from "ws";
import { parse as parseUrl } from "url";
import { AuthManager } from "../utils/auth-manager.js";
import { sshData, sshCredentials } from "../database/db/schema.js";
import { and, eq } from "drizzle-orm";
import { getDb } from "../database/db/index.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { systemLogger } from "../utils/logger.js";
import type { SSHHost } from "../../types/index.js";
const dockerConsoleLogger = systemLogger;
interface SSHSession {
client: SSHClient;
stream: any;
isConnected: boolean;
containerId?: string;
shell?: string;
}
const activeSessions = new Map<string, SSHSession>();
const wss = new WebSocketServer({
host: "0.0.0.0",
port: 30008,
verifyClient: async (info) => {
try {
const url = parseUrl(info.req.url || "", true);
const token = url.query.token as string;
if (!token) {
return false;
}
const authManager = AuthManager.getInstance();
const decoded = await authManager.verifyJWTToken(token);
if (!decoded || !decoded.userId) {
return false;
}
return true;
} catch (error) {
return false;
}
},
});
async function detectShell(
session: SSHSession,
containerId: string,
): Promise<string> {
const shells = ["bash", "sh", "ash"];
for (const shell of shells) {
try {
await new Promise<void>((resolve, reject) => {
session.client.exec(
`docker exec ${containerId} which ${shell}`,
(err, stream) => {
if (err) return reject(err);
let output = "";
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.on("close", (code: number) => {
if (code === 0 && output.trim()) {
resolve();
} else {
reject(new Error(`Shell ${shell} not found`));
}
});
stream.stderr.on("data", () => {
// Ignore stderr
});
},
);
});
return shell;
} catch {
continue;
}
}
return "sh";
}
async function createJumpHostChain(
jumpHosts: any[],
userId: string,
): Promise<SSHClient | null> {
if (!jumpHosts || jumpHosts.length === 0) {
return null;
}
let currentClient: SSHClient | null = null;
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostId = jumpHosts[i].hostId;
const jumpHostData = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, jumpHostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (jumpHostData.length === 0) {
throw new Error(`Jump host ${jumpHostId} not found`);
}
const jumpHost = jumpHostData[0] as unknown as SSHHost;
if (typeof jumpHost.jumpHosts === "string" && jumpHost.jumpHosts) {
try {
jumpHost.jumpHosts = JSON.parse(jumpHost.jumpHosts);
} catch (e) {
dockerConsoleLogger.error("Failed to parse jump hosts", e, {
hostId: jumpHost.id,
});
jumpHost.jumpHosts = [];
}
}
let resolvedCredentials: any = {
password: jumpHost.password,
sshKey: jumpHost.key,
keyPassword: jumpHost.keyPassword,
authType: jumpHost.authType,
};
if (jumpHost.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, jumpHost.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
resolvedCredentials = {
password: credential.password,
sshKey:
credential.private_key || credential.privateKey || credential.key,
keyPassword: credential.key_password || credential.keyPassword,
authType: credential.auth_type || credential.authType,
};
}
}
const client = new SSHClient();
const config: any = {
host: jumpHost.ip,
port: jumpHost.port || 22,
username: jumpHost.username,
tryKeyboard: true,
readyTimeout: 60000,
keepaliveInterval: 30000,
keepaliveCountMax: 120,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
};
if (
resolvedCredentials.authType === "password" &&
resolvedCredentials.password
) {
config.password = resolvedCredentials.password;
} else if (
resolvedCredentials.authType === "key" &&
resolvedCredentials.sshKey
) {
const cleanKey = resolvedCredentials.sshKey
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (resolvedCredentials.keyPassword) {
config.passphrase = resolvedCredentials.keyPassword;
}
}
if (currentClient) {
await new Promise<void>((resolve, reject) => {
currentClient!.forwardOut(
"127.0.0.1",
0,
jumpHost.ip,
jumpHost.port || 22,
(err, stream) => {
if (err) return reject(err);
config.sock = stream;
resolve();
},
);
});
}
await new Promise<void>((resolve, reject) => {
client.on("ready", () => resolve());
client.on("error", reject);
client.connect(config);
});
currentClient = client;
}
return currentClient;
}
wss.on("connection", async (ws: WebSocket, req) => {
const userId = (req as any).userId;
const sessionId = `docker-console-${Date.now()}-${Math.random()}`;
let sshSession: SSHSession | null = null;
ws.on("message", async (data) => {
try {
const message = JSON.parse(data.toString());
switch (message.type) {
case "connect": {
const { hostConfig, containerId, shell, cols, rows } =
message.data as {
hostConfig: SSHHost;
containerId: string;
shell?: string;
cols?: number;
rows?: number;
};
if (
typeof hostConfig.jumpHosts === "string" &&
hostConfig.jumpHosts
) {
try {
hostConfig.jumpHosts = JSON.parse(hostConfig.jumpHosts);
} catch (e) {
dockerConsoleLogger.error("Failed to parse jump hosts", e, {
hostId: hostConfig.id,
});
hostConfig.jumpHosts = [];
}
}
if (!hostConfig || !containerId) {
ws.send(
JSON.stringify({
type: "error",
message: "Host configuration and container ID are required",
}),
);
return;
}
if (!hostConfig.enableDocker) {
ws.send(
JSON.stringify({
type: "error",
message:
"Docker is not enabled for this host. Enable it in Host Settings.",
}),
);
return;
}
try {
let resolvedCredentials: any = {
password: hostConfig.password,
sshKey: hostConfig.key,
keyPassword: hostConfig.keyPassword,
authType: hostConfig.authType,
};
if (hostConfig.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, hostConfig.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
resolvedCredentials = {
password: credential.password,
sshKey:
credential.private_key ||
credential.privateKey ||
credential.key,
keyPassword:
credential.key_password || credential.keyPassword,
authType: credential.auth_type || credential.authType,
};
}
}
const client = new SSHClient();
const config: any = {
host: hostConfig.ip,
port: hostConfig.port || 22,
username: hostConfig.username,
tryKeyboard: true,
readyTimeout: 60000,
keepaliveInterval: 30000,
keepaliveCountMax: 120,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
};
if (
resolvedCredentials.authType === "password" &&
resolvedCredentials.password
) {
config.password = resolvedCredentials.password;
} else if (
resolvedCredentials.authType === "key" &&
resolvedCredentials.sshKey
) {
const cleanKey = resolvedCredentials.sshKey
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (resolvedCredentials.keyPassword) {
config.passphrase = resolvedCredentials.keyPassword;
}
}
if (hostConfig.jumpHosts && hostConfig.jumpHosts.length > 0) {
const jumpClient = await createJumpHostChain(
hostConfig.jumpHosts,
userId,
);
if (jumpClient) {
const stream = await new Promise<any>((resolve, reject) => {
jumpClient.forwardOut(
"127.0.0.1",
0,
hostConfig.ip,
hostConfig.port || 22,
(err, stream) => {
if (err) return reject(err);
resolve(stream);
},
);
});
config.sock = stream;
}
}
await new Promise<void>((resolve, reject) => {
client.on("ready", () => resolve());
client.on("error", reject);
client.connect(config);
});
sshSession = {
client,
stream: null,
isConnected: true,
containerId,
};
activeSessions.set(sessionId, sshSession);
let shellToUse = shell || "bash";
if (shell) {
try {
await new Promise<void>((resolve, reject) => {
client.exec(
`docker exec ${containerId} which ${shell}`,
(err, stream) => {
if (err) return reject(err);
let output = "";
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.on("close", (code: number) => {
if (code === 0 && output.trim()) {
resolve();
} else {
reject(new Error(`Shell ${shell} not available`));
}
});
stream.stderr.on("data", () => {
// Ignore stderr
});
},
);
});
} catch {
dockerConsoleLogger.warn(
`Requested shell ${shell} not found, detecting available shell`,
{
operation: "shell_validation",
sessionId,
containerId,
requestedShell: shell,
},
);
shellToUse = await detectShell(sshSession, containerId);
}
} else {
shellToUse = await detectShell(sshSession, containerId);
}
sshSession.shell = shellToUse;
const execCommand = `docker exec -it ${containerId} /bin/${shellToUse}`;
client.exec(
execCommand,
{
pty: {
term: "xterm-256color",
cols: cols || 80,
rows: rows || 24,
},
},
(err, stream) => {
if (err) {
dockerConsoleLogger.error(
"Failed to create docker exec",
err,
{
operation: "docker_exec",
sessionId,
containerId,
},
);
ws.send(
JSON.stringify({
type: "error",
message: `Failed to start console: ${err.message}`,
}),
);
return;
}
sshSession!.stream = stream;
stream.on("data", (data: Buffer) => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(
JSON.stringify({
type: "output",
data: data.toString("utf8"),
}),
);
}
});
// Discard container stderr; the interactive pty merges it into stdout
stream.stderr.on("data", () => {});
stream.on("close", () => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(
JSON.stringify({
type: "disconnected",
message: "Console session ended",
}),
);
}
if (sshSession) {
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
ws.send(
JSON.stringify({
type: "connected",
data: {
shell: shellToUse,
requestedShell: shell,
shellChanged: shell && shell !== shellToUse,
},
}),
);
},
);
} catch (error) {
dockerConsoleLogger.error("Failed to connect to container", error, {
operation: "console_connect",
sessionId,
containerId: message.data.containerId,
});
ws.send(
JSON.stringify({
type: "error",
message:
error instanceof Error
? error.message
: "Failed to connect to container",
}),
);
}
break;
}
case "input": {
if (sshSession && sshSession.stream) {
sshSession.stream.write(message.data);
}
break;
}
case "resize": {
if (sshSession && sshSession.stream) {
const { cols, rows } = message.data;
sshSession.stream.setWindow(rows, cols);
}
break;
}
case "disconnect": {
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
ws.send(
JSON.stringify({
type: "disconnected",
message: "Disconnected from container",
}),
);
}
break;
}
case "ping": {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({ type: "pong" }));
}
break;
}
default:
dockerConsoleLogger.warn("Unknown message type", {
operation: "ws_message",
type: message.type,
});
}
} catch (error) {
dockerConsoleLogger.error("WebSocket message error", error, {
operation: "ws_message",
sessionId,
});
ws.send(
JSON.stringify({
type: "error",
message: error instanceof Error ? error.message : "An error occurred",
}),
);
}
});
ws.on("close", () => {
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
ws.on("error", (error) => {
dockerConsoleLogger.error("WebSocket error", error, {
operation: "ws_error",
sessionId,
});
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
});
process.on("SIGTERM", () => {
activeSessions.forEach((session, sessionId) => {
if (session.stream) {
session.stream.end();
}
session.client.end();
});
activeSessions.clear();
wss.close(() => {
process.exit(0);
});
});
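The console server above listens on port 30008, authenticates via a `?token=` JWT in `verifyClient`, and then speaks a small JSON protocol: `connect`, `input`, `resize`, `disconnect`, and `ping`. A client-side sketch of those frames — the message shapes come from the handler above, but the union type and builder are illustrative, not from the repo:

```typescript
// Frame shapes for the docker-console WebSocket protocol handled above;
// the ConsoleFrame type and encodeFrame helper are illustrative.
type ConsoleFrame =
  | {
      type: "connect";
      data: {
        hostConfig: unknown; // SSHHost in the real server
        containerId: string;
        shell?: string;
        cols?: number;
        rows?: number;
      };
    }
  | { type: "input"; data: string }
  | { type: "resize"; data: { cols: number; rows: number } }
  | { type: "disconnect" }
  | { type: "ping" };

function encodeFrame(frame: ConsoleFrame): string {
  return JSON.stringify(frame);
}

// Example: request a bash console in a container, then resize the pty.
const connectFrame = encodeFrame({
  type: "connect",
  data: {
    hostConfig: { id: 1 }, // placeholder host config
    containerId: "abc123",
    shell: "bash",
    cols: 120,
    rows: 40,
  },
});
const resizeFrame = encodeFrame({ type: "resize", data: { cols: 100, rows: 30 } });
```

The server replies with `connected` (including the shell it actually found), `output`, `error`, `disconnected`, and `pong` frames, per the handlers above.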

src/backend/ssh/docker.ts (new file, 1,904 lines — diff suppressed because it is too large)

@@ -10,6 +10,7 @@ import { fileLogger, sshLogger } from "../utils/logger.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { AuthManager } from "../utils/auth-manager.js";
import type { AuthenticatedRequest } from "../../types/index.js";
import { createSocks5Connection } from "../utils/socks5-helper.js";
function isExecutableFile(permissions: string, fileName: string): boolean {
const hasExecutePermission =
@@ -278,6 +279,7 @@ interface PendingTOTPSession {
prompts?: Array<{ prompt: string; echo: boolean }>;
totpPromptIndex?: number;
resolvedPassword?: string;
totpAttempts: number;
}
const sshSessions: Record<string, SSHSession> = {};
@@ -356,6 +358,12 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
userProvidedPassword,
forceKeyboardInteractive,
jumpHosts,
useSocks5,
socks5Host,
socks5Port,
socks5Username,
socks5Password,
socks5ProxyChain,
} = req.body;
const userId = (req as AuthenticatedRequest).userId;
@@ -382,6 +390,15 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
if (sshSessions[sessionId]?.isConnected) {
cleanupSession(sessionId);
}
// Clean up any stale pending TOTP sessions
if (pendingTOTPSessions[sessionId]) {
try {
pendingTOTPSessions[sessionId].client.end();
} catch {}
delete pendingTOTPSessions[sessionId];
}
const client = new SSHClient();
let resolvedCredentials = { password, sshKey, keyPassword, authType };
@@ -545,9 +562,7 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
.json({ error: "Password required for password authentication" });
}
if (!forceKeyboardInteractive) {
config.password = resolvedCredentials.password;
}
config.password = resolvedCredentials.password;
} else if (resolvedCredentials.authType === "none") {
} else {
fileLogger.warn(
@@ -713,6 +728,7 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
prompts,
totpPromptIndex,
resolvedPassword: resolvedCredentials.password,
totpAttempts: 0,
};
res.json({
@@ -785,6 +801,7 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
prompts,
totpPromptIndex: passwordPromptIndex,
resolvedPassword: resolvedCredentials.password,
totpAttempts: 0,
};
res.json({
@@ -808,7 +825,47 @@ app.post("/ssh/file_manager/ssh/connect", async (req, res) => {
},
);
if (jumpHosts && jumpHosts.length > 0 && userId) {
if (
useSocks5 &&
(socks5Host || (socks5ProxyChain && (socks5ProxyChain as any).length > 0))
) {
try {
const socks5Socket = await createSocks5Connection(ip, port, {
useSocks5,
socks5Host,
socks5Port,
socks5Username,
socks5Password,
socks5ProxyChain: socks5ProxyChain as any,
});
if (socks5Socket) {
config.sock = socks5Socket;
client.connect(config);
return;
} else {
fileLogger.error("SOCKS5 socket is null for SFTP", undefined, {
operation: "sftp_socks5_socket_null",
sessionId,
});
}
} catch (socks5Error) {
fileLogger.error("SOCKS5 connection failed", socks5Error, {
operation: "socks5_connect",
sessionId,
hostId,
proxyHost: socks5Host,
proxyPort: socks5Port || 1080,
});
return res.status(500).json({
error:
"SOCKS5 proxy connection failed: " +
(socks5Error instanceof Error
? socks5Error.message
: "Unknown error"),
});
}
} else if (jumpHosts && jumpHosts.length > 0 && userId) {
try {
const jumpClient = await createJumpHostChain(jumpHosts, userId);
@@ -891,9 +948,7 @@ app.post("/ssh/file_manager/ssh/connect-totp", async (req, res) => {
delete pendingTOTPSessions[sessionId];
try {
session.client.end();
} catch (error) {
sshLogger.debug("Operation failed, continuing", { error });
}
} catch (error) {}
fileLogger.warn("TOTP session timeout before code submission", {
operation: "file_totp_verify",
sessionId,
@@ -1385,7 +1440,7 @@ app.post("/ssh/file_manager/ssh/writeFile", async (req, res) => {
let fileBuffer;
try {
if (typeof content === "string") {
fileBuffer = Buffer.from(content, "utf8");
fileBuffer = Buffer.from(content, "base64");
} else if (Buffer.isBuffer(content)) {
fileBuffer = content;
} else {
@@ -1461,7 +1516,22 @@ app.post("/ssh/file_manager/ssh/writeFile", async (req, res) => {
const tryFallbackMethod = () => {
try {
const base64Content = Buffer.from(content, "utf8").toString("base64");
let contentBuffer: Buffer;
if (typeof content === "string") {
try {
contentBuffer = Buffer.from(content, "base64");
if (contentBuffer.toString("base64") !== content) {
contentBuffer = Buffer.from(content, "utf8");
}
} catch {
contentBuffer = Buffer.from(content, "utf8");
}
} else if (Buffer.isBuffer(content)) {
contentBuffer = content;
} else {
contentBuffer = Buffer.from(content);
}
const base64Content = contentBuffer.toString("base64");
const escapedPath = filePath.replace(/'/g, "'\"'\"'");
const writeCommand = `echo '${base64Content}' | base64 -d > '${escapedPath}' && echo "SUCCESS"`;
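The fallback write above pipes base64 through the remote shell, so the target path must be safe inside single quotes. A minimal sketch of that escaping rule (the helper name is hypothetical, not project code):

```typescript
// Close-quote / insert / reopen: inside POSIX single quotes nothing is
// special, so the only character that needs handling is ' itself, which
// becomes '"'"' (end the quote, emit a double-quoted ', reopen the quote).
function shellSingleQuote(value: string): string {
  return `'${value.replace(/'/g, `'"'"'`)}'`;
}

const target = "/tmp/it's here.txt";
const cmd = `base64 -d > ${shellSingleQuote(target)}`;
// cmd === `base64 -d > '/tmp/it'"'"'s here.txt'`
```

This mirrors the `filePath.replace(/'/g, "'\"'\"'")` call in the write command above; other shell metacharacters need no handling because they are inert inside single quotes.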
@@ -1579,7 +1649,7 @@ app.post("/ssh/file_manager/ssh/uploadFile", async (req, res) => {
let fileBuffer;
try {
if (typeof content === "string") {
fileBuffer = Buffer.from(content, "utf8");
fileBuffer = Buffer.from(content, "base64");
} else if (Buffer.isBuffer(content)) {
fileBuffer = content;
} else {
@@ -1662,7 +1732,22 @@ app.post("/ssh/file_manager/ssh/uploadFile", async (req, res) => {
const tryFallbackMethod = () => {
try {
const base64Content = Buffer.from(content, "utf8").toString("base64");
let contentBuffer: Buffer;
if (typeof content === "string") {
try {
contentBuffer = Buffer.from(content, "base64");
if (contentBuffer.toString("base64") !== content) {
contentBuffer = Buffer.from(content, "utf8");
}
} catch {
contentBuffer = Buffer.from(content, "utf8");
}
} else if (Buffer.isBuffer(content)) {
contentBuffer = content;
} else {
contentBuffer = Buffer.from(content);
}
const base64Content = contentBuffer.toString("base64");
const chunkSize = 1000000;
const chunks = [];
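Both fallback paths above distinguish base64 payloads from plain text by round-tripping through `Buffer`. A self-contained sketch of that heuristic (assumed semantics, not the module's exported API):

```typescript
// Buffer.from(s, "base64") never throws -- it silently drops invalid
// characters -- so decoding alone proves nothing. Re-encoding and comparing
// against the input is what actually detects non-base64 strings.
function toBuffer(content: string | Buffer): Buffer {
  if (Buffer.isBuffer(content)) return content;
  const decoded = Buffer.from(content, "base64");
  // Caveat: plain text that happens to be canonical base64 (e.g. "test")
  // also round-trips, so this is only safe when callers control the encoding.
  return decoded.toString("base64") === content
    ? decoded
    : Buffer.from(content, "utf8");
}
```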
@@ -2940,21 +3025,10 @@ app.post("/ssh/file_manager/ssh/extractArchive", async (req, res) => {
let errorOutput = "";
stream.on("data", (data: Buffer) => {
fileLogger.debug("Extract stdout", {
operation: "extract_archive",
sessionId,
output: data.toString(),
});
});
stream.on("data", (data: Buffer) => {});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
fileLogger.debug("Extract stderr", {
operation: "extract_archive",
sessionId,
error: data.toString(),
});
});
stream.on("close", (code: number) => {
@@ -3132,21 +3206,10 @@ app.post("/ssh/file_manager/ssh/compressFiles", async (req, res) => {
let errorOutput = "";
stream.on("data", (data: Buffer) => {
fileLogger.debug("Compress stdout", {
operation: "compress_files",
sessionId,
output: data.toString(),
});
});
stream.on("data", (data: Buffer) => {});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
fileLogger.debug("Compress stderr", {
operation: "compress_files",
sessionId,
error: data.toString(),
});
});
stream.on("close", (code: number) => {


@@ -14,6 +14,7 @@ import { sshLogger } from "../utils/logger.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { AuthManager } from "../utils/auth-manager.js";
import { UserCrypto } from "../utils/user-crypto.js";
import { createSocks5Connection } from "../utils/socks5-helper.js";
interface ConnectToHostData {
cols: number;
@@ -32,6 +33,12 @@ interface ConnectToHostData {
userId?: string;
forceKeyboardInteractive?: boolean;
jumpHosts?: Array<{ hostId: number }>;
useSocks5?: boolean;
socks5Host?: string;
socks5Port?: number;
socks5Username?: string;
socks5Password?: string;
socks5ProxyChain?: unknown;
};
initialPath?: string;
executeCommand?: string;
@@ -130,10 +137,12 @@ async function createJumpHostChain(
const clients: Client[] = [];
try {
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostConfig = await resolveJumpHost(jumpHosts[i].hostId, userId);
const jumpHostConfigs = await Promise.all(
jumpHosts.map((jh) => resolveJumpHost(jh.hostId, userId)),
);
if (!jumpHostConfig) {
for (let i = 0; i < jumpHostConfigs.length; i++) {
if (!jumpHostConfigs[i]) {
sshLogger.error(`Jump host ${i + 1} not found`, undefined, {
operation: "jump_host_chain",
hostId: jumpHosts[i].hostId,
@@ -141,6 +150,10 @@ async function createJumpHostChain(
clients.forEach((c) => c.end());
return null;
}
}
for (let i = 0; i < jumpHostConfigs.length; i++) {
const jumpHostConfig = jumpHostConfigs[i];
const jumpClient = new Client();
clients.push(jumpClient);
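The loop above was reworked to resolve every jump-host config up front with `Promise.all` and fail fast before any client connects. The shape of that refactor, with a hypothetical `resolveOne`:

```typescript
// Resolve all configs concurrently; abort the whole chain if any is missing,
// mirroring the "clients.forEach((c) => c.end()); return null" path above.
async function resolveAll<T>(
  ids: number[],
  resolveOne: (id: number) => Promise<T | null>,
): Promise<T[] | null> {
  const configs = await Promise.all(ids.map(resolveOne));
  for (const config of configs) {
    if (config === null) return null; // fail fast before opening connections
  }
  return configs as T[];
}
```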
@@ -316,9 +329,10 @@ wss.on("connection", async (ws: WebSocket, req) => {
let sshConn: Client | null = null;
let sshStream: ClientChannel | null = null;
let pingInterval: NodeJS.Timeout | null = null;
let keyboardInteractiveFinish: ((responses: string[]) => void) | null = null;
let totpPromptSent = false;
let totpAttempts = 0;
let totpTimeout: NodeJS.Timeout | null = null;
let isKeyboardInteractive = false;
let keyboardInteractiveResponded = false;
let isConnecting = false;
@@ -435,9 +449,15 @@ wss.on("connection", async (ws: WebSocket, req) => {
case "totp_response": {
const totpData = data as TOTPResponseData;
if (keyboardInteractiveFinish && totpData?.code) {
if (totpTimeout) {
clearTimeout(totpTimeout);
totpTimeout = null;
}
const totpCode = totpData.code;
totpAttempts++;
keyboardInteractiveFinish([totpCode]);
keyboardInteractiveFinish = null;
totpPromptSent = false;
} else {
sshLogger.warn("TOTP response received but no callback available", {
operation: "totp_response_error",
@@ -458,6 +478,10 @@ wss.on("connection", async (ws: WebSocket, req) => {
case "password_response": {
const passwordData = data as TOTPResponseData;
if (keyboardInteractiveFinish && passwordData?.code) {
if (totpTimeout) {
clearTimeout(totpTimeout);
totpTimeout = null;
}
const password = passwordData.code;
keyboardInteractiveFinish([password]);
keyboardInteractiveFinish = null;
@@ -597,6 +621,13 @@ wss.on("connection", async (ws: WebSocket, req) => {
isConnecting,
isConnected,
});
ws.send(
JSON.stringify({
type: "error",
message: "Connection already in progress",
code: "DUPLICATE_CONNECTION",
}),
);
return;
}
@@ -617,7 +648,7 @@ wss.on("connection", async (ws: WebSocket, req) => {
);
cleanupSSH(connectionTimeout);
}
}, 120000);
}, 30000);
let resolvedCredentials = { password, key, keyPassword, keyType, authType };
let authMethodNotAvailable = false;
@@ -802,8 +833,6 @@ wss.on("connection", async (ws: WebSocket, req) => {
);
});
setupPingInterval();
if (initialPath && initialPath.trim() !== "") {
const cdCommand = `cd "${initialPath.replace(/"/g, '\\"')}" && pwd\n`;
stream.write(cdCommand);
@@ -987,6 +1016,25 @@ wss.on("connection", async (ws: WebSocket, req) => {
finish(responses);
};
totpTimeout = setTimeout(() => {
if (keyboardInteractiveFinish) {
keyboardInteractiveFinish = null;
totpPromptSent = false;
sshLogger.warn("TOTP prompt timeout", {
operation: "totp_timeout",
hostId: id,
});
ws.send(
JSON.stringify({
type: "error",
message: "TOTP verification timeout. Please reconnect.",
}),
);
cleanupSSH(connectionTimeout);
}
}, 180000);
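The 180-second timer above guards the stored keyboard-interactive `finish` callback so a silent client cannot leak it; the password prompt below reuses the same shape. A condensed sketch with hypothetical names:

```typescript
type Finish = (responses: string[]) => void;

// Hold the prompt callback until the client answers, but arm a timer so an
// unanswered prompt cannot keep the callback (and connection) alive forever.
function armPrompt(timeoutMs: number, onTimeout: () => void) {
  let finish: Finish | null = null;
  const timer = setTimeout(() => {
    if (finish) {
      finish = null; // drop the callback exactly once
      onTimeout();
    }
  }, timeoutMs);
  return {
    set(f: Finish) {
      finish = f;
    },
    answer(code: string): boolean {
      if (!finish) return false; // already answered or timed out
      clearTimeout(timer);
      const f = finish;
      finish = null;
      f([code]);
      return true;
    },
  };
}
```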
ws.send(
JSON.stringify({
type: "totp_required",
@@ -1021,6 +1069,24 @@ wss.on("connection", async (ws: WebSocket, req) => {
finish(responses);
};
totpTimeout = setTimeout(() => {
if (keyboardInteractiveFinish) {
keyboardInteractiveFinish = null;
keyboardInteractiveResponded = false;
sshLogger.warn("Password prompt timeout", {
operation: "password_timeout",
hostId: id,
});
ws.send(
JSON.stringify({
type: "error",
message: "Password verification timeout. Please reconnect.",
}),
);
cleanupSSH(connectionTimeout);
}
}, 180000);
ws.send(
JSON.stringify({
type: "password_required",
@@ -1049,10 +1115,10 @@ wss.on("connection", async (ws: WebSocket, req) => {
tryKeyboard: true,
keepaliveInterval: 30000,
keepaliveCountMax: 3,
readyTimeout: 120000,
readyTimeout: 30000,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
timeout: 120000,
timeout: 30000,
env: {
TERM: "xterm-256color",
LANG: "en_US.UTF-8",
@@ -1128,9 +1194,7 @@ wss.on("connection", async (ws: WebSocket, req) => {
return;
}
if (!hostConfig.forceKeyboardInteractive) {
connectConfig.password = resolvedCredentials.password;
}
connectConfig.password = resolvedCredentials.password;
} else if (
resolvedCredentials.authType === "key" &&
resolvedCredentials.key
@@ -1183,6 +1247,49 @@ wss.on("connection", async (ws: WebSocket, req) => {
return;
}
if (
hostConfig.useSocks5 &&
(hostConfig.socks5Host ||
(hostConfig.socks5ProxyChain &&
(hostConfig.socks5ProxyChain as any).length > 0))
) {
try {
const socks5Socket = await createSocks5Connection(ip, port, {
useSocks5: hostConfig.useSocks5,
socks5Host: hostConfig.socks5Host,
socks5Port: hostConfig.socks5Port,
socks5Username: hostConfig.socks5Username,
socks5Password: hostConfig.socks5Password,
socks5ProxyChain: hostConfig.socks5ProxyChain as any,
});
if (socks5Socket) {
connectConfig.sock = socks5Socket;
sshConn.connect(connectConfig);
return;
}
} catch (socks5Error) {
sshLogger.error("SOCKS5 connection failed", socks5Error, {
operation: "socks5_connect",
hostId: id,
proxyHost: hostConfig.socks5Host,
proxyPort: hostConfig.socks5Port || 1080,
});
ws.send(
JSON.stringify({
type: "error",
message:
"SOCKS5 proxy connection failed: " +
(socks5Error instanceof Error
? socks5Error.message
: "Unknown error"),
}),
);
cleanupSSH(connectionTimeout);
return;
}
}
if (
hostConfig.jumpHosts &&
hostConfig.jumpHosts.length > 0 &&
@@ -1279,9 +1386,9 @@ wss.on("connection", async (ws: WebSocket, req) => {
clearTimeout(timeoutId);
}
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
if (totpTimeout) {
clearTimeout(totpTimeout);
totpTimeout = null;
}
if (sshStream) {
@@ -1309,35 +1416,21 @@ wss.on("connection", async (ws: WebSocket, req) => {
}
totpPromptSent = false;
totpAttempts = 0;
isKeyboardInteractive = false;
keyboardInteractiveResponded = false;
keyboardInteractiveFinish = null;
isConnecting = false;
isConnected = false;
setTimeout(() => {
isCleaningUp = false;
}, 100);
isCleaningUp = false;
}
function setupPingInterval() {
pingInterval = setInterval(() => {
if (sshConn && sshStream) {
try {
sshStream.write("\x00");
} catch (e: unknown) {
sshLogger.error(
"SSH keepalive failed: " +
(e instanceof Error ? e.message : "Unknown error"),
);
cleanupSSH();
}
} else if (!sshConn || !sshStream) {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
}
}, 30000);
}
// Note: PTY-level keepalive (writing \x00 to the stream) was removed.
// It was causing ^@ characters to appear in terminals with echoctl enabled.
// SSH-level keepalive is configured via connectConfig (keepaliveInterval,
// keepaliveCountMax, tcpKeepAlive), which handles connection health monitoring
// without producing visible output on the terminal.
//
// See: https://github.com/Termix-SSH/Support/issues/232
// See: https://github.com/Termix-SSH/Support/issues/309
});



@@ -1,4 +1,4 @@
import express from "express";
import express, { type Response } from "express";
import cors from "cors";
import cookieParser from "cookie-parser";
import { Client } from "ssh2";
@@ -13,12 +13,16 @@ import type {
TunnelStatus,
VerificationData,
ErrorType,
AuthenticatedRequest,
} from "../../types/index.js";
import { CONNECTION_STATES } from "../../types/index.js";
import { tunnelLogger, sshLogger } from "../utils/logger.js";
import { SystemCrypto } from "../utils/system-crypto.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { DataCrypto } from "../utils/data-crypto.js";
import { createSocks5Connection } from "../utils/socks5-helper.js";
import { AuthManager } from "../utils/auth-manager.js";
import { PermissionManager } from "../utils/permission-manager.js";
const app = express();
app.use(
@@ -63,6 +67,10 @@ app.use(
app.use(cookieParser());
app.use(express.json());
const authManager = AuthManager.getInstance();
const permissionManager = PermissionManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const activeTunnels = new Map<string, Client>();
const retryCounters = new Map<string, number>();
const connectionStatus = new Map<string, TunnelStatus>();
@@ -77,6 +85,7 @@ const tunnelConnecting = new Set<string>();
const tunnelConfigs = new Map<string, TunnelConfig>();
const activeTunnelProcesses = new Map<string, ChildProcess>();
const pendingTunnelOperations = new Map<string, Promise<void>>();
function broadcastTunnelStatus(tunnelName: string, status: TunnelStatus): void {
if (
@@ -154,10 +163,75 @@ function getTunnelMarker(tunnelName: string) {
return `TUNNEL_MARKER_${tunnelName.replace(/[^a-zA-Z0-9]/g, "_")}`;
}
function cleanupTunnelResources(
function normalizeTunnelName(
hostId: number,
tunnelIndex: number,
displayName: string,
sourcePort: number,
endpointHost: string,
endpointPort: number,
): string {
return `${hostId}::${tunnelIndex}::${displayName}::${sourcePort}::${endpointHost}::${endpointPort}`;
}
function parseTunnelName(tunnelName: string): {
hostId?: number;
tunnelIndex?: number;
displayName: string;
sourcePort: string;
endpointHost: string;
endpointPort: string;
isLegacyFormat: boolean;
} {
const parts = tunnelName.split("::");
if (parts.length === 6) {
return {
hostId: parseInt(parts[0]),
tunnelIndex: parseInt(parts[1]),
displayName: parts[2],
sourcePort: parts[3],
endpointHost: parts[4],
endpointPort: parts[5],
isLegacyFormat: false,
};
}
tunnelLogger.warn(`Legacy tunnel name format: ${tunnelName}`);
const legacyParts = tunnelName.split("_");
return {
displayName: legacyParts[0] || "unknown",
sourcePort: legacyParts[legacyParts.length - 3] || "0",
endpointHost: legacyParts[legacyParts.length - 2] || "unknown",
endpointPort: legacyParts[legacyParts.length - 1] || "0",
isLegacyFormat: true,
};
}
function validateTunnelConfig(
tunnelName: string,
tunnelConfig: TunnelConfig,
): boolean {
const parsed = parseTunnelName(tunnelName);
if (parsed.isLegacyFormat) {
return true;
}
return (
parsed.hostId === tunnelConfig.sourceHostId &&
parsed.tunnelIndex === tunnelConfig.tunnelIndex &&
String(parsed.sourcePort) === String(tunnelConfig.sourcePort) &&
parsed.endpointHost === tunnelConfig.endpointHost &&
String(parsed.endpointPort) === String(tunnelConfig.endpointPort)
);
}
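The `normalizeTunnelName`/`parseTunnelName`/`validateTunnelConfig` trio above round-trips a `::`-delimited name so a tunnel's identity can be checked against its config. A condensed sketch (field order taken from `normalizeTunnelName`; the scheme assumes no field contains `::`):

```typescript
// hostId::tunnelIndex::displayName::sourcePort::endpointHost::endpointPort
function buildName(
  hostId: number,
  idx: number,
  displayName: string,
  srcPort: number,
  epHost: string,
  epPort: number,
): string {
  return [hostId, idx, displayName, srcPort, epHost, epPort].join("::");
}

function parseName(tunnelName: string) {
  const parts = tunnelName.split("::");
  if (parts.length !== 6) return null; // legacy "_"-joined names take another path
  return {
    hostId: Number(parts[0]),
    tunnelIndex: Number(parts[1]),
    displayName: parts[2],
    // ports stay strings, matching parseTunnelName above
    sourcePort: parts[3],
    endpointHost: parts[4],
    endpointPort: parts[5],
  };
}

const example = buildName(7, 0, "db", 5432, "10.0.0.5", 5432);
// example === "7::0::db::5432::10.0.0.5::5432"
```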
async function cleanupTunnelResources(
tunnelName: string,
forceCleanup = false,
): void {
): Promise<void> {
if (cleanupInProgress.has(tunnelName)) {
return;
}
@@ -170,13 +244,16 @@ function cleanupTunnelResources(
const tunnelConfig = tunnelConfigs.get(tunnelName);
if (tunnelConfig) {
killRemoteTunnelByMarker(tunnelConfig, tunnelName, (err) => {
cleanupInProgress.delete(tunnelName);
if (err) {
tunnelLogger.error(
`Failed to kill remote tunnel for '${tunnelName}': ${err.message}`,
);
}
await new Promise<void>((resolve) => {
killRemoteTunnelByMarker(tunnelConfig, tunnelName, (err) => {
cleanupInProgress.delete(tunnelName);
if (err) {
tunnelLogger.error(
`Failed to kill remote tunnel for '${tunnelName}': ${err.message}`,
);
}
resolve();
});
});
} else {
cleanupInProgress.delete(tunnelName);
@@ -272,11 +349,11 @@ function resetRetryState(tunnelName: string): void {
});
}
function handleDisconnect(
async function handleDisconnect(
tunnelName: string,
tunnelConfig: TunnelConfig | null,
shouldRetry = true,
): void {
): Promise<void> {
if (tunnelVerifications.has(tunnelName)) {
try {
const verification = tunnelVerifications.get(tunnelName);
@@ -286,7 +363,11 @@ function handleDisconnect(
tunnelVerifications.delete(tunnelName);
}
cleanupTunnelResources(tunnelName);
while (cleanupInProgress.has(tunnelName)) {
await new Promise((resolve) => setTimeout(resolve, 100));
}
await cleanupTunnelResources(tunnelName);
if (manualDisconnects.has(tunnelName)) {
resetRetryState(tunnelName);
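Before tearing down, `handleDisconnect` above polls until any concurrent cleanup of the same tunnel has finished. A sketch of that guard (hypothetical helper; the deadline is an added safety assumption, the source loops without one):

```typescript
// Poll until the shared "cleanup in progress" flag for this tunnel clears,
// so two teardowns never race on the same resources.
async function waitForCleanup(
  inProgress: Set<string>,
  name: string,
  intervalMs = 100,
  maxMs = 5000,
): Promise<void> {
  const deadline = Date.now() + maxMs;
  while (inProgress.has(name)) {
    if (Date.now() >= deadline) throw new Error(`cleanup of ${name} timed out`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```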
@@ -490,43 +571,76 @@ async function connectSSHTunnel(
authMethod: tunnelConfig.sourceAuthMethod,
};
if (tunnelConfig.sourceCredentialId && tunnelConfig.sourceUserId) {
try {
const userDataKey = DataCrypto.getUserDataKey(tunnelConfig.sourceUserId);
if (userDataKey) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, tunnelConfig.sourceCredentialId),
eq(sshCredentials.userId, tunnelConfig.sourceUserId),
),
),
"ssh_credentials",
tunnelConfig.sourceUserId,
);
const effectiveUserId =
tunnelConfig.requestingUserId || tunnelConfig.sourceUserId;
if (credentials.length > 0) {
const credential = credentials[0];
resolvedSourceCredentials = {
password: credential.password as string | undefined,
sshKey: (credential.private_key ||
credential.privateKey ||
credential.key) as string | undefined,
keyPassword: (credential.key_password || credential.keyPassword) as
| string
| undefined,
keyType: (credential.key_type || credential.keyType) as
| string
| undefined,
authMethod: (credential.auth_type || credential.authType) as string,
};
if (tunnelConfig.sourceCredentialId && effectiveUserId) {
try {
if (
tunnelConfig.requestingUserId &&
tunnelConfig.requestingUserId !== tunnelConfig.sourceUserId
) {
const { SharedCredentialManager } =
await import("../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
if (tunnelConfig.sourceHostId) {
const sharedCred = await sharedCredManager.getSharedCredentialForUser(
tunnelConfig.sourceHostId,
tunnelConfig.requestingUserId,
);
if (sharedCred) {
resolvedSourceCredentials = {
password: sharedCred.password,
sshKey: sharedCred.key,
keyPassword: sharedCred.keyPassword,
keyType: sharedCred.keyType,
authMethod: sharedCred.authType,
};
} else {
const errorMessage = `Cannot connect tunnel '${tunnelName}': shared credentials not available`;
tunnelLogger.error(errorMessage);
broadcastTunnelStatus(tunnelName, {
connected: false,
status: CONNECTION_STATES.FAILED,
reason: errorMessage,
});
return;
}
}
} else {
const userDataKey = DataCrypto.getUserDataKey(effectiveUserId);
if (userDataKey) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(eq(sshCredentials.id, tunnelConfig.sourceCredentialId)),
"ssh_credentials",
effectiveUserId,
);
if (credentials.length > 0) {
const credential = credentials[0];
resolvedSourceCredentials = {
password: credential.password as string | undefined,
sshKey: (credential.private_key ||
credential.privateKey ||
credential.key) as string | undefined,
keyPassword: (credential.key_password ||
credential.keyPassword) as string | undefined,
keyType: (credential.key_type || credential.keyType) as
| string
| undefined,
authMethod: (credential.auth_type ||
credential.authType) as string,
};
}
}
}
} catch (error) {
tunnelLogger.warn("Failed to resolve source credentials from database", {
tunnelLogger.warn("Failed to resolve source credentials", {
operation: "tunnel_connect",
tunnelName,
credentialId: tunnelConfig.sourceCredentialId,
@@ -581,12 +695,7 @@ async function connectSSHTunnel(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, tunnelConfig.endpointCredentialId),
eq(sshCredentials.userId, tunnelConfig.endpointUserId),
),
),
.where(eq(sshCredentials.id, tunnelConfig.endpointCredentialId)),
"ssh_credentials",
tunnelConfig.endpointUserId,
);
@@ -1016,6 +1125,51 @@ async function connectSSHTunnel(
});
}
if (
tunnelConfig.useSocks5 &&
(tunnelConfig.socks5Host ||
(tunnelConfig.socks5ProxyChain &&
tunnelConfig.socks5ProxyChain.length > 0))
) {
try {
const socks5Socket = await createSocks5Connection(
tunnelConfig.sourceIP,
tunnelConfig.sourceSSHPort,
{
useSocks5: tunnelConfig.useSocks5,
socks5Host: tunnelConfig.socks5Host,
socks5Port: tunnelConfig.socks5Port,
socks5Username: tunnelConfig.socks5Username,
socks5Password: tunnelConfig.socks5Password,
socks5ProxyChain: tunnelConfig.socks5ProxyChain,
},
);
if (socks5Socket) {
connOptions.sock = socks5Socket;
conn.connect(connOptions);
return;
}
} catch (socks5Error) {
tunnelLogger.error("SOCKS5 connection failed for tunnel", socks5Error, {
operation: "socks5_connect",
tunnelName,
proxyHost: tunnelConfig.socks5Host,
proxyPort: tunnelConfig.socks5Port || 1080,
});
broadcastTunnelStatus(tunnelName, {
connected: false,
status: CONNECTION_STATES.FAILED,
reason:
"SOCKS5 proxy connection failed: " +
(socks5Error instanceof Error
? socks5Error.message
: "Unknown error"),
});
return;
}
}
conn.connect(connOptions);
}
@@ -1042,12 +1196,7 @@ async function killRemoteTunnelByMarker(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, tunnelConfig.sourceCredentialId),
eq(sshCredentials.userId, tunnelConfig.sourceUserId),
),
),
.where(eq(sshCredentials.id, tunnelConfig.sourceCredentialId)),
"ssh_credentials",
tunnelConfig.sourceUserId,
);
@@ -1248,7 +1397,57 @@ async function killRemoteTunnelByMarker(
callback(err);
});
conn.connect(connOptions);
if (
tunnelConfig.useSocks5 &&
(tunnelConfig.socks5Host ||
(tunnelConfig.socks5ProxyChain &&
tunnelConfig.socks5ProxyChain.length > 0))
) {
(async () => {
try {
const socks5Socket = await createSocks5Connection(
tunnelConfig.sourceIP,
tunnelConfig.sourceSSHPort,
{
useSocks5: tunnelConfig.useSocks5,
socks5Host: tunnelConfig.socks5Host,
socks5Port: tunnelConfig.socks5Port,
socks5Username: tunnelConfig.socks5Username,
socks5Password: tunnelConfig.socks5Password,
socks5ProxyChain: tunnelConfig.socks5ProxyChain,
},
);
if (socks5Socket) {
connOptions.sock = socks5Socket;
conn.connect(connOptions);
} else {
callback(new Error("Failed to create SOCKS5 connection"));
}
} catch (socks5Error) {
tunnelLogger.error(
"SOCKS5 connection failed for killing tunnel",
socks5Error,
{
operation: "socks5_connect_kill",
tunnelName,
proxyHost: tunnelConfig.socks5Host,
proxyPort: tunnelConfig.socks5Port || 1080,
},
);
callback(
new Error(
"SOCKS5 proxy connection failed: " +
(socks5Error instanceof Error
? socks5Error.message
: "Unknown error"),
),
);
}
})();
} else {
conn.connect(connOptions);
}
}
app.get("/ssh/tunnel/status", (req, res) => {
@@ -1266,103 +1465,291 @@ app.get("/ssh/tunnel/status/:tunnelName", (req, res) => {
res.json({ name: tunnelName, status });
});
app.post("/ssh/tunnel/connect", (req, res) => {
const tunnelConfig: TunnelConfig = req.body;
app.post(
"/ssh/tunnel/connect",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const tunnelConfig: TunnelConfig = req.body;
const userId = req.userId;
if (!tunnelConfig || !tunnelConfig.name) {
return res.status(400).json({ error: "Invalid tunnel configuration" });
}
if (!userId) {
return res.status(401).json({ error: "Authentication required" });
}
const tunnelName = tunnelConfig.name;
if (!tunnelConfig || !tunnelConfig.name) {
return res.status(400).json({ error: "Invalid tunnel configuration" });
}
cleanupTunnelResources(tunnelName);
const tunnelName = tunnelConfig.name;
manualDisconnects.delete(tunnelName);
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
try {
if (!validateTunnelConfig(tunnelName, tunnelConfig)) {
tunnelLogger.error(`Tunnel config validation failed`, {
operation: "tunnel_connect",
tunnelName,
configHostId: tunnelConfig.sourceHostId,
configTunnelIndex: tunnelConfig.tunnelIndex,
});
return res.status(400).json({
error: "Tunnel configuration does not match tunnel name",
});
}
tunnelConfigs.set(tunnelName, tunnelConfig);
if (tunnelConfig.sourceHostId) {
const accessInfo = await permissionManager.canAccessHost(
userId,
tunnelConfig.sourceHostId,
"read",
);
connectSSHTunnel(tunnelConfig, 0).catch((error) => {
tunnelLogger.error(
`Failed to connect tunnel ${tunnelConfig.name}: ${error instanceof Error ? error.message : "Unknown error"}`,
);
});
if (!accessInfo.hasAccess) {
tunnelLogger.warn("User attempted tunnel connect without access", {
operation: "tunnel_connect_unauthorized",
userId,
hostId: tunnelConfig.sourceHostId,
tunnelName,
});
return res.status(403).json({ error: "Access denied to this host" });
}
res.json({ message: "Connection request received", tunnelName });
});
if (accessInfo.isShared && !accessInfo.isOwner) {
tunnelConfig.requestingUserId = userId;
}
}
app.post("/ssh/tunnel/disconnect", (req, res) => {
const { tunnelName } = req.body;
if (pendingTunnelOperations.has(tunnelName)) {
try {
await pendingTunnelOperations.get(tunnelName);
} catch (error) {
tunnelLogger.warn(`Previous tunnel operation failed`, { tunnelName });
}
}
if (!tunnelName) {
return res.status(400).json({ error: "Tunnel name required" });
}
const operation = (async () => {
manualDisconnects.delete(tunnelName);
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
manualDisconnects.add(tunnelName);
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
await cleanupTunnelResources(tunnelName);
if (activeRetryTimers.has(tunnelName)) {
clearTimeout(activeRetryTimers.get(tunnelName)!);
activeRetryTimers.delete(tunnelName);
}
if (tunnelConfigs.has(tunnelName)) {
const existingConfig = tunnelConfigs.get(tunnelName);
if (
existingConfig &&
(existingConfig.sourceHostId !== tunnelConfig.sourceHostId ||
existingConfig.tunnelIndex !== tunnelConfig.tunnelIndex)
) {
throw new Error(`Tunnel name collision detected: ${tunnelName}`);
}
}
cleanupTunnelResources(tunnelName, true);
if (!tunnelConfig.endpointIP || !tunnelConfig.endpointUsername) {
try {
const systemCrypto = SystemCrypto.getInstance();
const internalAuthToken = await systemCrypto.getInternalAuthToken();
broadcastTunnelStatus(tunnelName, {
connected: false,
status: CONNECTION_STATES.DISCONNECTED,
manualDisconnect: true,
});
const allHostsResponse = await axios.get(
"http://localhost:30001/ssh/db/host/internal/all",
{
headers: {
"Content-Type": "application/json",
"X-Internal-Auth-Token": internalAuthToken,
},
},
);
const tunnelConfig = tunnelConfigs.get(tunnelName) || null;
handleDisconnect(tunnelName, tunnelConfig, false);
const allHosts: SSHHost[] = allHostsResponse.data || [];
const endpointHost = allHosts.find(
(h) =>
h.name === tunnelConfig.endpointHost ||
`${h.username}@${h.ip}` === tunnelConfig.endpointHost,
);
setTimeout(() => {
manualDisconnects.delete(tunnelName);
}, 5000);
if (!endpointHost) {
throw new Error(
`Endpoint host '${tunnelConfig.endpointHost}' not found in database`,
);
}
res.json({ message: "Disconnect request received", tunnelName });
});
tunnelConfig.endpointIP = endpointHost.ip;
tunnelConfig.endpointSSHPort = endpointHost.port;
tunnelConfig.endpointUsername = endpointHost.username;
tunnelConfig.endpointPassword = endpointHost.password;
tunnelConfig.endpointAuthMethod = endpointHost.authType;
tunnelConfig.endpointSSHKey = endpointHost.key;
tunnelConfig.endpointKeyPassword = endpointHost.keyPassword;
tunnelConfig.endpointKeyType = endpointHost.keyType;
tunnelConfig.endpointCredentialId = endpointHost.credentialId;
tunnelConfig.endpointUserId = endpointHost.userId;
} catch (resolveError) {
tunnelLogger.error(
"Failed to resolve endpoint host",
resolveError,
{
operation: "tunnel_connect_resolve_endpoint_failed",
tunnelName,
endpointHost: tunnelConfig.endpointHost,
},
);
throw new Error(
`Failed to resolve endpoint host: ${resolveError instanceof Error ? resolveError.message : "Unknown error"}`,
);
}
}
app.post("/ssh/tunnel/cancel", (req, res) => {
const { tunnelName } = req.body;
tunnelConfigs.set(tunnelName, tunnelConfig);
await connectSSHTunnel(tunnelConfig, 0);
})();
if (!tunnelName) {
return res.status(400).json({ error: "Tunnel name required" });
}
pendingTunnelOperations.set(tunnelName, operation);
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
res.json({ message: "Connection request received", tunnelName });
if (activeRetryTimers.has(tunnelName)) {
clearTimeout(activeRetryTimers.get(tunnelName)!);
activeRetryTimers.delete(tunnelName);
}
operation.finally(() => {
pendingTunnelOperations.delete(tunnelName);
});
} catch (error) {
tunnelLogger.error("Failed to process tunnel connect", error, {
operation: "tunnel_connect",
tunnelName,
userId,
});
res.status(500).json({ error: "Failed to connect tunnel" });
}
},
);
if (countdownIntervals.has(tunnelName)) {
clearInterval(countdownIntervals.get(tunnelName)!);
countdownIntervals.delete(tunnelName);
}
app.post(
"/ssh/tunnel/disconnect",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const { tunnelName } = req.body;
const userId = req.userId;
if (!userId) {
return res.status(401).json({ error: "Authentication required" });
}
if (!tunnelName) {
return res.status(400).json({ error: "Tunnel name required" });
}
try {
const config = tunnelConfigs.get(tunnelName);
if (config && config.sourceHostId) {
const accessInfo = await permissionManager.canAccessHost(
userId,
config.sourceHostId,
"read",
);
if (!accessInfo.hasAccess) {
return res.status(403).json({ error: "Access denied" });
}
}
manualDisconnects.add(tunnelName);
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
if (activeRetryTimers.has(tunnelName)) {
clearTimeout(activeRetryTimers.get(tunnelName)!);
activeRetryTimers.delete(tunnelName);
}
await cleanupTunnelResources(tunnelName, true);
broadcastTunnelStatus(tunnelName, {
connected: false,
status: CONNECTION_STATES.DISCONNECTED,
manualDisconnect: true,
});
const tunnelConfig = tunnelConfigs.get(tunnelName) || null;
handleDisconnect(tunnelName, tunnelConfig, false);
setTimeout(() => {
manualDisconnects.delete(tunnelName);
}, 5000);
res.json({ message: "Disconnect request received", tunnelName });
} catch (error) {
tunnelLogger.error("Failed to disconnect tunnel", error, {
operation: "tunnel_disconnect",
tunnelName,
userId,
});
res.status(500).json({ error: "Failed to disconnect tunnel" });
}
},
);
app.post(
"/ssh/tunnel/cancel",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const { tunnelName } = req.body;
const userId = req.userId;
if (!userId) {
return res.status(401).json({ error: "Authentication required" });
}
if (!tunnelName) {
return res.status(400).json({ error: "Tunnel name required" });
}
try {
const config = tunnelConfigs.get(tunnelName);
if (config && config.sourceHostId) {
const accessInfo = await permissionManager.canAccessHost(
userId,
config.sourceHostId,
"read",
);
if (!accessInfo.hasAccess) {
return res.status(403).json({ error: "Access denied" });
}
}
retryCounters.delete(tunnelName);
retryExhaustedTunnels.delete(tunnelName);
if (activeRetryTimers.has(tunnelName)) {
clearTimeout(activeRetryTimers.get(tunnelName)!);
activeRetryTimers.delete(tunnelName);
}
if (countdownIntervals.has(tunnelName)) {
clearInterval(countdownIntervals.get(tunnelName)!);
countdownIntervals.delete(tunnelName);
}
await cleanupTunnelResources(tunnelName, true);
broadcastTunnelStatus(tunnelName, {
connected: false,
status: CONNECTION_STATES.DISCONNECTED,
manualDisconnect: true,
});
const tunnelConfig = tunnelConfigs.get(tunnelName) || null;
handleDisconnect(tunnelName, tunnelConfig, false);
setTimeout(() => {
manualDisconnects.delete(tunnelName);
}, 5000);
res.json({ message: "Cancel request received", tunnelName });
} catch (error) {
tunnelLogger.error("Failed to cancel tunnel retry", error, {
operation: "tunnel_cancel",
tunnelName,
userId,
});
res.status(500).json({ error: "Failed to cancel tunnel retry" });
}
},
);
async function initializeAutoStartTunnels(): Promise<void> {
try {
@@ -1408,12 +1795,19 @@ async function initializeAutoStartTunnels(): Promise<void> {
);
if (endpointHost) {
const tunnelIndex =
host.tunnelConnections.indexOf(tunnelConnection);
const tunnelConfig: TunnelConfig = {
name: normalizeTunnelName(
host.id,
tunnelIndex,
host.name || `${host.username}@${host.ip}`,
tunnelConnection.sourcePort,
tunnelConnection.endpointHost,
tunnelConnection.endpointPort,
),
sourceHostId: host.id,
tunnelIndex: tunnelIndex,
hostName: host.name || `${host.username}@${host.ip}`,
sourceIP: host.ip,
sourceSSHPort: host.port,
@@ -1429,6 +1823,7 @@ async function initializeAutoStartTunnels(): Promise<void> {
endpointIP: endpointHost.ip,
endpointSSHPort: endpointHost.port,
endpointUsername: endpointHost.username,
endpointHost: tunnelConnection.endpointHost,
endpointPassword:
tunnelConnection.endpointPassword ||
endpointHost.autostartPassword ||
@@ -1453,6 +1848,11 @@ async function initializeAutoStartTunnels(): Promise<void> {
retryInterval: tunnelConnection.retryInterval * 1000,
autoStart: tunnelConnection.autoStart,
isPinned: host.pin,
useSocks5: host.useSocks5,
socks5Host: host.socks5Host,
socks5Port: host.socks5Port,
socks5Username: host.socks5Username,
socks5Password: host.socks5Password,
};
autoStartTunnels.push(tunnelConfig);

View File

@@ -3,28 +3,87 @@ import type { Client } from "ssh2";
export function execCommand(
client: Client,
command: string,
timeoutMs = 30000,
): Promise<{
stdout: string;
stderr: string;
code: number | null;
}> {
return new Promise((resolve, reject) => {
let settled = false;
let stream: any = null;
const timeout = setTimeout(() => {
if (!settled) {
settled = true;
cleanup();
reject(new Error(`Command timeout after ${timeoutMs}ms: ${command}`));
}
}, timeoutMs);
const cleanup = () => {
clearTimeout(timeout);
if (stream) {
try {
stream.removeAllListeners();
if (stream.stderr) {
stream.stderr.removeAllListeners();
}
stream.destroy();
} catch (error) {
// Ignore cleanup errors
}
}
};
client.exec(command, { pty: false }, (err, _stream) => {
if (err) {
if (!settled) {
settled = true;
cleanup();
reject(err);
}
return;
}
stream = _stream;
let stdout = "";
let stderr = "";
let exitCode: number | null = null;
stream
.on("close", (code: number | undefined) => {
if (!settled) {
settled = true;
exitCode = typeof code === "number" ? code : null;
cleanup();
resolve({ stdout, stderr, code: exitCode });
}
})
.on("data", (data: Buffer) => {
stdout += data.toString("utf8");
})
.on("error", (streamErr: Error) => {
if (!settled) {
settled = true;
cleanup();
reject(streamErr);
}
});
if (stream.stderr) {
stream.stderr
.on("data", (data: Buffer) => {
stderr += data.toString("utf8");
})
.on("error", (stderrErr: Error) => {
if (!settled) {
settled = true;
cleanup();
reject(stderrErr);
}
});
}
});
});
}
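The reworked `execCommand` above hinges on a single `settled` flag that every exit path (timeout, `close`, stream error, stderr error) must pass through. A minimal standalone sketch of that pattern; the names here are illustrative, not from the codebase:

```typescript
// Wrap a callback-style producer so that only the first settle wins and the
// watchdog timer is always cleaned up -- the same guard execCommand uses.
function withSettledGuard<T>(
  run: (resolve: (v: T) => void, reject: (e: Error) => void) => void,
  timeoutMs: number,
): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    let settled = false;
    const timer = setTimeout(() => {
      if (!settled) {
        settled = true;
        reject(new Error(`timeout after ${timeoutMs}ms`));
      }
    }, timeoutMs);
    // Every settle path flips the flag and clears the timer exactly once.
    const settleOnce =
      <A>(fn: (arg: A) => void) =>
      (arg: A) => {
        if (!settled) {
          settled = true;
          clearTimeout(timer);
          fn(arg);
        }
      };
    run(settleOnce(resolve), settleOnce(reject));
  });
}
```

A Promise already ignores a second `resolve` on its own; the flag exists so that `cleanup()` and `clearTimeout` run exactly once even when `close`, `error`, and the timeout race each other.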

View File

@@ -26,12 +26,20 @@ export async function collectCpuMetrics(client: Client): Promise<{
let loadTriplet: [number, number, number] | null = null;
try {
const [stat1, loadAvgOut, coresOut] = await Promise.race([
Promise.all([
execCommand(client, "cat /proc/stat"),
execCommand(client, "cat /proc/loadavg"),
execCommand(
client,
"nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo",
),
]),
new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error("CPU metrics collection timeout")),
25000,
),
),
]);
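The hunk above bounds the whole `Promise.all` with a `Promise.race` against a rejecting timer. A generic helper capturing the same idea (hypothetical, not part of the diff); note it also clears the losing timer, which the inline version leaves armed until it fires:

```typescript
// Race a unit of work against a watchdog timer, clearing the timer
// whichever side wins, so a hung remote command cannot stall the caller.
async function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  label: string,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const guard = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} collection timeout`)),
      ms,
    );
  });
  try {
    return await Promise.race([work, guard]);
  } finally {
    clearTimeout(timer); // drop the watchdog once either side settles
  }
}
```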

View File

@@ -1,5 +1,6 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export interface LoginRecord {
user: string;
@@ -46,10 +47,20 @@ export async function collectLoginStats(client: Client): Promise<LoginStats> {
const timeStr = parts.slice(timeStart, timeStart + 5).join(" ");
if (user && user !== "wtmp" && tty !== "system") {
let parsedTime: string;
try {
const date = new Date(timeStr);
parsedTime = isNaN(date.getTime())
? new Date().toISOString()
: date.toISOString();
} catch (e) {
parsedTime = new Date().toISOString();
}
recentLogins.push({
user,
ip,
time: parsedTime,
status: "success",
});
if (ip !== "local") {
@@ -59,9 +70,7 @@ export async function collectLoginStats(client: Client): Promise<LoginStats> {
}
}
}
} catch (e) {}
try {
const failedOut = await execCommand(
@@ -96,12 +105,20 @@ export async function collectLoginStats(client: Client): Promise<LoginStats> {
}
if (user && ip) {
let parsedTime: string;
try {
const date = timeStr ? new Date(timeStr) : new Date();
parsedTime = isNaN(date.getTime())
? new Date().toISOString()
: date.toISOString();
} catch (e) {
parsedTime = new Date().toISOString();
}
failedLogins.push({
user,
ip,
time: parsedTime,
status: "failed",
});
if (ip !== "unknown") {
@@ -109,9 +126,7 @@ export async function collectLoginStats(client: Client): Promise<LoginStats> {
}
}
}
} catch (e) {}
return {
recentLogins: recentLogins.slice(0, 10),

View File

@@ -68,12 +68,7 @@ export async function collectNetworkMetrics(client: Client): Promise<{
txBytes: null,
});
}
} catch (e) {}
return { interfaces };
}

View File

@@ -33,11 +33,13 @@ export async function collectProcessesMetrics(client: Client): Promise<{
for (let i = 1; i < Math.min(psLines.length, 11); i++) {
const parts = psLines[i].split(/\s+/);
if (parts.length >= 11) {
const cpuVal = Number(parts[2]);
const memVal = Number(parts[3]);
topProcesses.push({
pid: parts[1],
user: parts[0],
cpu: Number.isFinite(cpuVal) ? cpuVal.toString() : "0",
mem: Number.isFinite(memVal) ? memVal.toString() : "0",
command: parts.slice(10).join(" ").substring(0, 50),
});
}
@@ -46,14 +48,13 @@ export async function collectProcessesMetrics(client: Client): Promise<{
const procCount = await execCommand(client, "ps aux | wc -l");
const runningCount = await execCommand(client, "ps aux | grep -c ' R '");
const totalCount = Number(procCount.stdout.trim()) - 1;
totalProcesses = Number.isFinite(totalCount) ? totalCount : null;
const runningCount2 = Number(runningCount.stdout.trim());
runningProcesses = Number.isFinite(runningCount2) ? runningCount2 : null;
} catch (e) {}
return {
total: totalProcesses,

View File

@@ -23,10 +23,7 @@ export async function collectSystemMetrics(client: Client): Promise<{
kernel = kernelOut.stdout.trim() || null;
os = osOut.stdout.trim() || null;
} catch (e) {
// No error log
}
return {

View File

@@ -21,12 +21,7 @@ export async function collectUptimeMetrics(client: Client): Promise<{
uptimeFormatted = `${days}d ${hours}h ${minutes}m`;
}
}
} catch (e) {}
return {
seconds: uptimeSeconds,

View File

@@ -102,6 +102,8 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
await import("./ssh/tunnel.js");
await import("./ssh/file-manager.js");
await import("./ssh/server-stats.js");
await import("./ssh/docker.js");
await import("./ssh/docker-console.js");
await import("./dashboard.js");
process.on("SIGINT", () => {

View File

@@ -154,9 +154,8 @@ class AuthManager {
return;
}
const { getSqlite, saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
const sqlite = getSqlite();
@@ -169,6 +168,23 @@ class AuthManager {
if (migrationResult.migrated) {
await saveMemoryDatabaseToFile();
}
try {
const { CredentialSystemEncryptionMigration } =
await import("./credential-system-encryption-migration.js");
const credMigration = new CredentialSystemEncryptionMigration();
const credResult = await credMigration.migrateUserCredentials(userId);
if (credResult.migrated > 0) {
await saveMemoryDatabaseToFile();
}
} catch (error) {
databaseLogger.warn("Credential migration failed during login", {
operation: "login_credential_migration_failed",
userId,
error: error instanceof Error ? error.message : "Unknown error",
});
}
} catch (error) {
databaseLogger.error("Lazy encryption migration failed", error, {
operation: "lazy_encryption_migration_error",
@@ -231,9 +247,8 @@ class AuthManager {
});
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
@@ -334,9 +349,8 @@ class AuthManager {
await db.delete(sessions).where(eq(sessions.id, sessionId));
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
@@ -387,9 +401,8 @@ class AuthManager {
}
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
@@ -430,9 +443,8 @@ class AuthManager {
.where(sql`${sessions.expiresAt} < datetime('now')`);
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
@@ -568,9 +580,8 @@ class AuthManager {
.where(eq(sessions.id, payload.sessionId))
.then(async () => {
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
const remainingSessions = await db
@@ -714,9 +725,8 @@ class AuthManager {
await db.delete(sessions).where(eq(sessions.id, sessionId));
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(

View File

@@ -0,0 +1,131 @@
import { db } from "../database/db/index.js";
import { sshCredentials } from "../database/db/schema.js";
import { eq, and, or, isNull } from "drizzle-orm";
import { DataCrypto } from "./data-crypto.js";
import { SystemCrypto } from "./system-crypto.js";
import { FieldCrypto } from "./field-crypto.js";
import { databaseLogger } from "./logger.js";
export class CredentialSystemEncryptionMigration {
async migrateUserCredentials(userId: string): Promise<{
migrated: number;
failed: number;
skipped: number;
}> {
try {
const userDEK = DataCrypto.getUserDataKey(userId);
if (!userDEK) {
throw new Error("User must be logged in to migrate credentials");
}
const systemCrypto = SystemCrypto.getInstance();
const CSKEK = await systemCrypto.getCredentialSharingKey();
const credentials = await db
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.userId, userId),
or(
isNull(sshCredentials.systemPassword),
isNull(sshCredentials.systemKey),
isNull(sshCredentials.systemKeyPassword),
),
),
);
let migrated = 0;
let failed = 0;
const skipped = 0;
for (const cred of credentials) {
try {
const plainPassword = cred.password
? FieldCrypto.decryptField(
cred.password,
userDEK,
cred.id.toString(),
"password",
)
: null;
const plainKey = cred.key
? FieldCrypto.decryptField(
cred.key,
userDEK,
cred.id.toString(),
"key",
)
: null;
const plainKeyPassword = cred.key_password
? FieldCrypto.decryptField(
cred.key_password,
userDEK,
cred.id.toString(),
"key_password",
)
: null;
const systemPassword = plainPassword
? FieldCrypto.encryptField(
plainPassword,
CSKEK,
cred.id.toString(),
"password",
)
: null;
const systemKey = plainKey
? FieldCrypto.encryptField(
plainKey,
CSKEK,
cred.id.toString(),
"key",
)
: null;
const systemKeyPassword = plainKeyPassword
? FieldCrypto.encryptField(
plainKeyPassword,
CSKEK,
cred.id.toString(),
"key_password",
)
: null;
await db
.update(sshCredentials)
.set({
systemPassword,
systemKey,
systemKeyPassword,
updatedAt: new Date().toISOString(),
})
.where(eq(sshCredentials.id, cred.id));
migrated++;
} catch (error) {
databaseLogger.error("Failed to migrate credential", error, {
credentialId: cred.id,
userId,
});
failed++;
}
}
return { migrated, failed, skipped };
} catch (error) {
databaseLogger.error(
"Credential system encryption migration failed",
error,
{
operation: "credential_migration_failed",
userId,
error: error instanceof Error ? error.message : "Unknown error",
},
);
throw error;
}
}
}
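The migration class decrypts each field with the user's DEK and re-encrypts it under the shared CSKEK. A self-contained sketch of that envelope step using Node's AES-256-GCM; `encryptField`/`decryptField` here are stand-ins with an assumed wire format (iv ‖ tag ‖ ciphertext), not the real `FieldCrypto` API:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a UTF-8 field under a 32-byte key; pack iv + auth tag + ciphertext.
function encryptField(plain: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

// Reverse of encryptField: split iv (12B), tag (16B), ciphertext.
function decryptField(blob: string, key: Buffer): string {
  const raw = Buffer.from(blob, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28));
  return Buffer.concat([
    decipher.update(raw.subarray(28)),
    decipher.final(),
  ]).toString("utf8");
}

// Migration step: decrypt with the user's DEK, re-encrypt under the CSKEK,
// so the shared copy stays readable while the owner is offline.
const userDEK = randomBytes(32);
const cskek = randomBytes(32);
const stored = encryptField("hunter2", userDEK);
const migrated = encryptField(decryptField(stored, userDEK), cskek);
```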

View File

@@ -475,6 +475,52 @@ class DataCrypto {
return false;
}
}
/**
* Encrypt sensitive credential fields with system key for offline sharing
* Returns an object with systemPassword, systemKey, systemKeyPassword fields
*/
static async encryptRecordWithSystemKey<T extends Record<string, unknown>>(
tableName: string,
record: T,
systemKey: Buffer,
): Promise<Partial<T>> {
const systemEncrypted: Record<string, unknown> = {};
const recordId = record.id || "temp-" + Date.now();
if (tableName !== "ssh_credentials") {
return systemEncrypted as Partial<T>;
}
if (record.password && typeof record.password === "string") {
systemEncrypted.systemPassword = FieldCrypto.encryptField(
record.password as string,
systemKey,
recordId as string,
"password",
);
}
if (record.key && typeof record.key === "string") {
systemEncrypted.systemKey = FieldCrypto.encryptField(
record.key as string,
systemKey,
recordId as string,
"key",
);
}
if (record.key_password && typeof record.key_password === "string") {
systemEncrypted.systemKeyPassword = FieldCrypto.encryptField(
record.key_password as string,
systemKey,
recordId as string,
"key_password",
);
}
return systemEncrypted as Partial<T>;
}
}
export { DataCrypto };

View File

@@ -327,11 +327,7 @@ class DatabaseFileEncryption {
fs.accessSync(envPath, fs.constants.R_OK);
envFileReadable = true;
}
} catch (error) {}
databaseLogger.error(
"Database decryption authentication failed - possible causes: wrong DATABASE_KEY, corrupted files, or interrupted write",

View File

@@ -36,7 +36,7 @@ const SENSITIVE_FIELDS = [
const TRUNCATE_FIELDS = ["data", "content", "body", "response", "request"];
export class Logger {
private serviceName: string;
private serviceIcon: string;
private serviceColor: string;

View File

@@ -0,0 +1,436 @@
import type { Request, Response, NextFunction } from "express";
import { db } from "../database/db/index.js";
import {
hostAccess,
roles,
userRoles,
sshData,
users,
} from "../database/db/schema.js";
import { eq, and, or, isNull, gte, sql } from "drizzle-orm";
import { databaseLogger } from "./logger.js";
interface AuthenticatedRequest extends Request {
userId?: string;
dataKey?: Buffer;
}
interface HostAccessInfo {
hasAccess: boolean;
isOwner: boolean;
isShared: boolean;
permissionLevel?: "view";
expiresAt?: string | null;
}
interface PermissionCheckResult {
allowed: boolean;
reason?: string;
}
class PermissionManager {
private static instance: PermissionManager;
private permissionCache: Map<
string,
{ permissions: string[]; timestamp: number }
>;
private readonly CACHE_TTL = 5 * 60 * 1000;
private constructor() {
this.permissionCache = new Map();
setInterval(() => {
this.cleanupExpiredAccess().catch((error) => {
databaseLogger.error(
"Failed to run periodic host access cleanup",
error,
{
operation: "host_access_cleanup_periodic",
},
);
});
}, 60 * 1000);
setInterval(() => {
this.clearPermissionCache();
}, this.CACHE_TTL);
}
static getInstance(): PermissionManager {
if (!this.instance) {
this.instance = new PermissionManager();
}
return this.instance;
}
/**
* Clean up expired host access entries
*/
private async cleanupExpiredAccess(): Promise<void> {
try {
const now = new Date().toISOString();
const result = await db
.delete(hostAccess)
.where(
and(
sql`${hostAccess.expiresAt} IS NOT NULL`,
sql`${hostAccess.expiresAt} <= ${now}`,
),
)
.returning({ id: hostAccess.id });
} catch (error) {
databaseLogger.error("Failed to cleanup expired host access", error, {
operation: "host_access_cleanup_failed",
});
}
}
/**
* Clear permission cache
*/
private clearPermissionCache(): void {
this.permissionCache.clear();
}
/**
* Invalidate permission cache for a specific user
*/
invalidateUserPermissionCache(userId: string): void {
this.permissionCache.delete(userId);
}
/**
* Get user permissions from roles
*/
async getUserPermissions(userId: string): Promise<string[]> {
const cached = this.permissionCache.get(userId);
if (cached && Date.now() - cached.timestamp < this.CACHE_TTL) {
return cached.permissions;
}
try {
const userRoleRecords = await db
.select({
permissions: roles.permissions,
})
.from(userRoles)
.innerJoin(roles, eq(userRoles.roleId, roles.id))
.where(eq(userRoles.userId, userId));
const allPermissions = new Set<string>();
for (const record of userRoleRecords) {
try {
const permissions = JSON.parse(record.permissions) as string[];
for (const perm of permissions) {
allPermissions.add(perm);
}
} catch (parseError) {
databaseLogger.warn("Failed to parse role permissions", {
operation: "get_user_permissions",
userId,
error: parseError,
});
}
}
const permissionsArray = Array.from(allPermissions);
this.permissionCache.set(userId, {
permissions: permissionsArray,
timestamp: Date.now(),
});
return permissionsArray;
} catch (error) {
databaseLogger.error("Failed to get user permissions", error, {
operation: "get_user_permissions",
userId,
});
return [];
}
}
/**
* Check if user has a specific permission
* Supports wildcards: "hosts.*", "*"
*/
async hasPermission(userId: string, permission: string): Promise<boolean> {
const userPermissions = await this.getUserPermissions(userId);
if (userPermissions.includes("*")) {
return true;
}
if (userPermissions.includes(permission)) {
return true;
}
const parts = permission.split(".");
for (let i = parts.length; i > 0; i--) {
const wildcardPermission = parts.slice(0, i).join(".") + ".*";
if (userPermissions.includes(wildcardPermission)) {
return true;
}
}
return false;
}
/**
* Check if user can access a specific host
*/
async canAccessHost(
userId: string,
hostId: number,
action: "read" | "write" | "execute" | "delete" | "share" = "read",
): Promise<HostAccessInfo> {
try {
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length > 0) {
return {
hasAccess: true,
isOwner: true,
isShared: false,
};
}
const userRoleIds = await db
.select({ roleId: userRoles.roleId })
.from(userRoles)
.where(eq(userRoles.userId, userId));
const roleIds = userRoleIds.map((r) => r.roleId);
const now = new Date().toISOString();
const sharedAccess = await db
.select()
.from(hostAccess)
.where(
and(
eq(hostAccess.hostId, hostId),
or(
eq(hostAccess.userId, userId),
roleIds.length > 0
? sql`${hostAccess.roleId} IN (${sql.join(
roleIds.map((id) => sql`${id}`),
sql`, `,
)})`
: sql`false`,
),
or(isNull(hostAccess.expiresAt), gte(hostAccess.expiresAt, now)),
),
)
.limit(1);
if (sharedAccess.length > 0) {
const access = sharedAccess[0];
if (action === "write" || action === "delete") {
return {
hasAccess: false,
isOwner: false,
isShared: true,
permissionLevel: access.permissionLevel as "view",
expiresAt: access.expiresAt,
};
}
try {
await db
.update(hostAccess)
.set({
lastAccessedAt: now,
})
.where(eq(hostAccess.id, access.id));
} catch (error) {
databaseLogger.warn("Failed to update host access timestamp", {
operation: "update_host_access_timestamp",
error,
});
}
return {
hasAccess: true,
isOwner: false,
isShared: true,
permissionLevel: access.permissionLevel as "view",
expiresAt: access.expiresAt,
};
}
return {
hasAccess: false,
isOwner: false,
isShared: false,
};
} catch (error) {
databaseLogger.error("Failed to check host access", error, {
operation: "can_access_host",
userId,
hostId,
action,
});
return {
hasAccess: false,
isOwner: false,
isShared: false,
};
}
}
/**
* Check if user is admin (backward compatibility)
*/
async isAdmin(userId: string): Promise<boolean> {
try {
const user = await db
.select({ isAdmin: users.is_admin })
.from(users)
.where(eq(users.id, userId))
.limit(1);
if (user.length > 0 && user[0].isAdmin) {
return true;
}
const adminRoles = await db
.select({ roleName: roles.name })
.from(userRoles)
.innerJoin(roles, eq(userRoles.roleId, roles.id))
.where(
and(
eq(userRoles.userId, userId),
or(eq(roles.name, "admin"), eq(roles.name, "super_admin")),
),
);
return adminRoles.length > 0;
} catch (error) {
databaseLogger.error("Failed to check admin status", error, {
operation: "is_admin",
userId,
});
return false;
}
}
/**
* Middleware: Require specific permission
*/
requirePermission(permission: string) {
return async (
req: AuthenticatedRequest,
res: Response,
next: NextFunction,
) => {
const userId = req.userId;
if (!userId) {
return res.status(401).json({ error: "Not authenticated" });
}
const hasPermission = await this.hasPermission(userId, permission);
if (!hasPermission) {
databaseLogger.warn("Permission denied", {
operation: "permission_check",
userId,
permission,
path: req.path,
});
return res.status(403).json({
error: "Insufficient permissions",
required: permission,
});
}
next();
};
}
/**
* Middleware: Require host access
*/
requireHostAccess(
hostIdParam: string = "id",
action: "read" | "write" | "execute" | "delete" | "share" = "read",
) {
return async (
req: AuthenticatedRequest,
res: Response,
next: NextFunction,
) => {
const userId = req.userId;
if (!userId) {
return res.status(401).json({ error: "Not authenticated" });
}
const hostId = parseInt(req.params[hostIdParam], 10);
if (isNaN(hostId)) {
return res.status(400).json({ error: "Invalid host ID" });
}
const accessInfo = await this.canAccessHost(userId, hostId, action);
if (!accessInfo.hasAccess) {
databaseLogger.warn("Host access denied", {
operation: "host_access_check",
userId,
hostId,
action,
});
return res.status(403).json({
error: "Access denied to host",
hostId,
action,
});
}
(req as any).hostAccessInfo = accessInfo;
next();
};
}
/**
* Middleware: Require admin role (backward compatible)
*/
requireAdmin() {
return async (
req: AuthenticatedRequest,
res: Response,
next: NextFunction,
) => {
const userId = req.userId;
if (!userId) {
return res.status(401).json({ error: "Not authenticated" });
}
const isAdmin = await this.isAdmin(userId);
if (!isAdmin) {
databaseLogger.warn("Admin access denied", {
operation: "admin_check",
userId,
path: req.path,
});
return res.status(403).json({ error: "Admin access required" });
}
next();
};
}
}
export { PermissionManager };
export type { AuthenticatedRequest, HostAccessInfo, PermissionCheckResult };
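`hasPermission` grants a permission on an exact match, a global `"*"`, or any dotted-prefix wildcard. The matching logic, extracted into a standalone function for illustration:

```typescript
// Does the granted list cover `permission`? "hosts.tunnels.create" matches
// an exact entry, "*", or any prefix wildcard like "hosts.*" or
// "hosts.tunnels.*" -- the same walk canAccessHost's helper performs.
function matchesPermission(granted: string[], permission: string): boolean {
  if (granted.includes("*") || granted.includes(permission)) return true;
  const parts = permission.split(".");
  for (let i = parts.length; i > 0; i--) {
    if (granted.includes(parts.slice(0, i).join(".") + ".*")) return true;
  }
  return false;
}

matchesPermission(["hosts.*"], "hosts.tunnels.create"); // true
matchesPermission(["hosts.view"], "hosts.edit"); // false
```

Walking the prefixes from longest to shortest means the check stays O(depth) per lookup regardless of how many roles contributed permissions.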

View File

@@ -0,0 +1,700 @@
import { db } from "../database/db/index.js";
import {
sharedCredentials,
sshCredentials,
hostAccess,
users,
userRoles,
sshData,
} from "../database/db/schema.js";
import { eq, and } from "drizzle-orm";
import { DataCrypto } from "./data-crypto.js";
import { FieldCrypto } from "./field-crypto.js";
import { databaseLogger } from "./logger.js";
interface CredentialData {
username: string;
authType: string;
password?: string;
key?: string;
keyPassword?: string;
keyType?: string;
}
/**
* Manages shared credentials for RBAC host sharing.
* Creates per-user encrypted credential copies to enable credential sharing
* without requiring the credential owner to be online.
*/
class SharedCredentialManager {
private static instance: SharedCredentialManager;
private constructor() {}
static getInstance(): SharedCredentialManager {
if (!this.instance) {
this.instance = new SharedCredentialManager();
}
return this.instance;
}
/**
* Create shared credential for a specific user
* Called when sharing a host with a user
*/
async createSharedCredentialForUser(
hostAccessId: number,
originalCredentialId: number,
targetUserId: string,
ownerId: string,
): Promise<void> {
try {
const ownerDEK = DataCrypto.getUserDataKey(ownerId);
if (ownerDEK) {
const targetDEK = DataCrypto.getUserDataKey(targetUserId);
if (!targetDEK) {
await this.createPendingSharedCredential(
hostAccessId,
originalCredentialId,
targetUserId,
);
return;
}
const credentialData = await this.getDecryptedCredential(
originalCredentialId,
ownerId,
ownerDEK,
);
const encryptedForTarget = this.encryptCredentialForUser(
credentialData,
targetUserId,
targetDEK,
hostAccessId,
);
await db.insert(sharedCredentials).values({
hostAccessId,
originalCredentialId,
targetUserId,
...encryptedForTarget,
needsReEncryption: false,
});
} else {
const targetDEK = DataCrypto.getUserDataKey(targetUserId);
if (!targetDEK) {
await this.createPendingSharedCredential(
hostAccessId,
originalCredentialId,
targetUserId,
);
return;
}
const credentialData =
await this.getDecryptedCredentialViaSystemKey(originalCredentialId);
const encryptedForTarget = this.encryptCredentialForUser(
credentialData,
targetUserId,
targetDEK,
hostAccessId,
);
await db.insert(sharedCredentials).values({
hostAccessId,
originalCredentialId,
targetUserId,
...encryptedForTarget,
needsReEncryption: false,
});
}
} catch (error) {
databaseLogger.error("Failed to create shared credential", error, {
operation: "create_shared_credential",
hostAccessId,
targetUserId,
});
throw error;
}
}
/**
* Create shared credentials for all users in a role
* Called when sharing a host with a role
*/
async createSharedCredentialsForRole(
hostAccessId: number,
originalCredentialId: number,
roleId: number,
ownerId: string,
): Promise<void> {
try {
const roleUsers = await db
.select({ userId: userRoles.userId })
.from(userRoles)
.where(eq(userRoles.roleId, roleId));
for (const { userId } of roleUsers) {
try {
await this.createSharedCredentialForUser(
hostAccessId,
originalCredentialId,
userId,
ownerId,
);
} catch (error) {
databaseLogger.error(
"Failed to create shared credential for role member",
error,
{
operation: "create_shared_credentials_role",
hostAccessId,
roleId,
userId,
},
);
}
}
} catch (error) {
databaseLogger.error(
"Failed to create shared credentials for role",
error,
{
operation: "create_shared_credentials_role",
hostAccessId,
roleId,
},
);
throw error;
}
}
/**
* Get credential data for a shared user
* Called when a shared user connects to a host
*/
async getSharedCredentialForUser(
hostId: number,
userId: string,
): Promise<CredentialData | null> {
try {
const userDEK = DataCrypto.getUserDataKey(userId);
if (!userDEK) {
throw new Error(`User ${userId} data not unlocked`);
}
const sharedCred = await db
.select()
.from(sharedCredentials)
.innerJoin(
hostAccess,
eq(sharedCredentials.hostAccessId, hostAccess.id),
)
.where(
and(
eq(hostAccess.hostId, hostId),
eq(sharedCredentials.targetUserId, userId),
),
)
.limit(1);
if (sharedCred.length === 0) {
return null;
}
const cred = sharedCred[0].shared_credentials;
if (cred.needsReEncryption) {
databaseLogger.warn(
"Shared credential needs re-encryption but cannot be accessed yet",
{
operation: "get_shared_credential_pending",
hostId,
userId,
},
);
return null;
}
return this.decryptSharedCredential(cred, userDEK);
} catch (error) {
databaseLogger.error("Failed to get shared credential", error, {
operation: "get_shared_credential",
hostId,
userId,
});
throw error;
}
}
/**
* Update all shared credentials when original credential is updated
* Called when credential owner updates credential
*/
async updateSharedCredentialsForOriginal(
credentialId: number,
ownerId: string,
): Promise<void> {
try {
const sharedCreds = await db
.select()
.from(sharedCredentials)
.where(eq(sharedCredentials.originalCredentialId, credentialId));
const ownerDEK = DataCrypto.getUserDataKey(ownerId);
let credentialData: CredentialData;
if (ownerDEK) {
credentialData = await this.getDecryptedCredential(
credentialId,
ownerId,
ownerDEK,
);
} else {
try {
credentialData =
await this.getDecryptedCredentialViaSystemKey(credentialId);
} catch (error) {
databaseLogger.warn(
"Cannot update shared credentials: owner offline and credential not migrated",
{
operation: "update_shared_credentials_failed",
credentialId,
ownerId,
error: error instanceof Error ? error.message : "Unknown error",
},
);
await db
.update(sharedCredentials)
.set({ needsReEncryption: true })
.where(eq(sharedCredentials.originalCredentialId, credentialId));
return;
}
}
for (const sharedCred of sharedCreds) {
const targetDEK = DataCrypto.getUserDataKey(sharedCred.targetUserId);
if (!targetDEK) {
await db
.update(sharedCredentials)
.set({ needsReEncryption: true })
.where(eq(sharedCredentials.id, sharedCred.id));
continue;
}
const encryptedForTarget = this.encryptCredentialForUser(
credentialData,
sharedCred.targetUserId,
targetDEK,
sharedCred.hostAccessId,
);
await db
.update(sharedCredentials)
.set({
...encryptedForTarget,
needsReEncryption: false,
updatedAt: new Date().toISOString(),
})
.where(eq(sharedCredentials.id, sharedCred.id));
}
} catch (error) {
databaseLogger.error("Failed to update shared credentials", error, {
operation: "update_shared_credentials",
credentialId,
});
}
}
/**
* Delete all shared credentials when the original credential is deleted
* Called from the credential deletion route
*/
async deleteSharedCredentialsForOriginal(
credentialId: number,
): Promise<void> {
try {
await db
.delete(sharedCredentials)
.where(eq(sharedCredentials.originalCredentialId, credentialId));
} catch (error) {
databaseLogger.error("Failed to delete shared credentials", error, {
operation: "delete_shared_credentials",
credentialId,
});
}
}
/**
* Re-encrypt pending shared credentials for a user when they log in
* Called during user login
*/
async reEncryptPendingCredentialsForUser(userId: string): Promise<void> {
try {
const userDEK = DataCrypto.getUserDataKey(userId);
if (!userDEK) {
return;
}
const pendingCreds = await db
.select()
.from(sharedCredentials)
.where(
and(
eq(sharedCredentials.targetUserId, userId),
eq(sharedCredentials.needsReEncryption, true),
),
);
for (const cred of pendingCreds) {
await this.reEncryptSharedCredential(cred.id, userId);
}
} catch (error) {
databaseLogger.error("Failed to re-encrypt pending credentials", error, {
operation: "reencrypt_pending_credentials",
userId,
});
}
}
private async getDecryptedCredential(
credentialId: number,
ownerId: string,
ownerDEK: Buffer,
): Promise<CredentialData> {
const creds = await db
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, credentialId),
eq(sshCredentials.userId, ownerId),
),
)
.limit(1);
if (creds.length === 0) {
throw new Error(`Credential ${credentialId} not found`);
}
const cred = creds[0];
return {
username: cred.username,
authType: cred.authType,
password: cred.password
? this.decryptField(cred.password, ownerDEK, credentialId, "password")
: undefined,
key: cred.key
? this.decryptField(cred.key, ownerDEK, credentialId, "key")
: undefined,
keyPassword: cred.key_password
? this.decryptField(
cred.key_password,
ownerDEK,
credentialId,
"key_password",
)
: undefined,
keyType: cred.keyType,
};
}
/**
* Decrypt a credential using the system key (enables sharing while the owner is offline)
*/
private async getDecryptedCredentialViaSystemKey(
credentialId: number,
): Promise<CredentialData> {
const creds = await db
.select()
.from(sshCredentials)
.where(eq(sshCredentials.id, credentialId))
.limit(1);
if (creds.length === 0) {
throw new Error(`Credential ${credentialId} not found`);
}
const cred = creds[0];
if (!cred.systemPassword && !cred.systemKey && !cred.systemKeyPassword) {
throw new Error(
"Credential not yet migrated for offline sharing. " +
"Please ask credential owner to log in to enable sharing.",
);
}
const { SystemCrypto } = await import("./system-crypto.js");
const systemCrypto = SystemCrypto.getInstance();
const CSKEK = await systemCrypto.getCredentialSharingKey();
return {
username: cred.username,
authType: cred.authType,
password: cred.systemPassword
? this.decryptField(
cred.systemPassword,
CSKEK,
credentialId,
"password",
)
: undefined,
key: cred.systemKey
? this.decryptField(cred.systemKey, CSKEK, credentialId, "key")
: undefined,
keyPassword: cred.systemKeyPassword
? this.decryptField(
cred.systemKeyPassword,
CSKEK,
credentialId,
"key_password",
)
: undefined,
keyType: cred.keyType,
};
}
private encryptCredentialForUser(
credentialData: CredentialData,
targetUserId: string,
targetDEK: Buffer,
hostAccessId: number,
): {
encryptedUsername: string;
encryptedAuthType: string;
encryptedPassword: string | null;
encryptedKey: string | null;
encryptedKeyPassword: string | null;
encryptedKeyType: string | null;
} {
const recordId = `shared-${hostAccessId}-${targetUserId}`;
return {
encryptedUsername: FieldCrypto.encryptField(
credentialData.username,
targetDEK,
recordId,
"username",
),
encryptedAuthType: credentialData.authType,
encryptedPassword: credentialData.password
? FieldCrypto.encryptField(
credentialData.password,
targetDEK,
recordId,
"password",
)
: null,
encryptedKey: credentialData.key
? FieldCrypto.encryptField(
credentialData.key,
targetDEK,
recordId,
"key",
)
: null,
encryptedKeyPassword: credentialData.keyPassword
? FieldCrypto.encryptField(
credentialData.keyPassword,
targetDEK,
recordId,
"key_password",
)
: null,
encryptedKeyType: credentialData.keyType || null,
};
}
private decryptSharedCredential(
sharedCred: typeof sharedCredentials.$inferSelect,
userDEK: Buffer,
): CredentialData {
const recordId = `shared-${sharedCred.hostAccessId}-${sharedCred.targetUserId}`;
return {
username: FieldCrypto.decryptField(
sharedCred.encryptedUsername,
userDEK,
recordId,
"username",
),
authType: sharedCred.encryptedAuthType,
password: sharedCred.encryptedPassword
? FieldCrypto.decryptField(
sharedCred.encryptedPassword,
userDEK,
recordId,
"password",
)
: undefined,
key: sharedCred.encryptedKey
? FieldCrypto.decryptField(
sharedCred.encryptedKey,
userDEK,
recordId,
"key",
)
: undefined,
keyPassword: sharedCred.encryptedKeyPassword
? FieldCrypto.decryptField(
sharedCred.encryptedKeyPassword,
userDEK,
recordId,
"key_password",
)
: undefined,
keyType: sharedCred.encryptedKeyType || undefined,
};
}
private decryptField(
encryptedValue: string,
dek: Buffer,
recordId: number | string,
fieldName: string,
): string {
try {
return FieldCrypto.decryptField(
encryptedValue,
dek,
recordId.toString(),
fieldName,
);
} catch (error) {
databaseLogger.warn("Field decryption failed, returning value as-is", {
operation: "decrypt_field",
fieldName,
recordId,
error: error instanceof Error ? error.message : "Unknown error",
});
return encryptedValue;
}
}
private async createPendingSharedCredential(
hostAccessId: number,
originalCredentialId: number,
targetUserId: string,
): Promise<void> {
await db.insert(sharedCredentials).values({
hostAccessId,
originalCredentialId,
targetUserId,
encryptedUsername: "",
encryptedAuthType: "",
needsReEncryption: true,
});
databaseLogger.info("Created pending shared credential", {
operation: "create_pending_shared_credential",
hostAccessId,
targetUserId,
});
}
private async reEncryptSharedCredential(
sharedCredId: number,
userId: string,
): Promise<void> {
try {
const sharedCred = await db
.select()
.from(sharedCredentials)
.where(eq(sharedCredentials.id, sharedCredId))
.limit(1);
if (sharedCred.length === 0) {
databaseLogger.warn("Re-encrypt: shared credential not found", {
operation: "reencrypt_not_found",
sharedCredId,
});
return;
}
const cred = sharedCred[0];
const access = await db
.select()
.from(hostAccess)
.innerJoin(sshData, eq(hostAccess.hostId, sshData.id))
.where(eq(hostAccess.id, cred.hostAccessId))
.limit(1);
if (access.length === 0) {
databaseLogger.warn("Re-encrypt: host access not found", {
operation: "reencrypt_access_not_found",
sharedCredId,
});
return;
}
const ownerId = access[0].ssh_data.userId;
const userDEK = DataCrypto.getUserDataKey(userId);
if (!userDEK) {
databaseLogger.warn("Re-encrypt: user DEK not available", {
operation: "reencrypt_user_offline",
sharedCredId,
userId,
});
return;
}
const ownerDEK = DataCrypto.getUserDataKey(ownerId);
let credentialData: CredentialData;
if (ownerDEK) {
credentialData = await this.getDecryptedCredential(
cred.originalCredentialId,
ownerId,
ownerDEK,
);
} else {
try {
credentialData = await this.getDecryptedCredentialViaSystemKey(
cred.originalCredentialId,
);
} catch (error) {
databaseLogger.warn(
"Re-encrypt: system key decryption failed, credential may not be migrated yet",
{
operation: "reencrypt_system_key_failed",
sharedCredId,
error: error instanceof Error ? error.message : "Unknown error",
},
);
return;
}
}
const encryptedForTarget = this.encryptCredentialForUser(
credentialData,
userId,
userDEK,
cred.hostAccessId,
);
await db
.update(sharedCredentials)
.set({
...encryptedForTarget,
needsReEncryption: false,
updatedAt: new Date().toISOString(),
})
.where(eq(sharedCredentials.id, sharedCredId));
} catch (error) {
databaseLogger.error("Failed to re-encrypt shared credential", error, {
operation: "reencrypt_shared_credential",
sharedCredId,
userId,
});
}
}
}
export { SharedCredentialManager };
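FieldCrypto's wire format isn't shown in this file, but the way every call threads a `recordId` and field name through encrypt/decrypt suggests context-bound authenticated encryption. A minimal sketch of that idea (hypothetical names and format, not the real FieldCrypto API), using AES-256-GCM with the record/field pair as additional authenticated data:

```typescript
import crypto from "node:crypto";

// Hypothetical sketch of AAD-bound field encryption: the record id and field
// name are mixed in as additional authenticated data, so the ciphertext is
// only valid for the exact row/column it was written for.
function encryptField(plain: string, dek: Buffer, recordId: string, field: string): string {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", dek, iv);
  cipher.setAAD(Buffer.from(`${recordId}:${field}`));
  const ct = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(".");
}

function decryptField(enc: string, dek: Buffer, recordId: string, field: string): string {
  const [ivB64, tagB64, ctB64] = enc.split(".");
  const decipher = crypto.createDecipheriv("aes-256-gcm", dek, Buffer.from(ivB64, "base64"));
  decipher.setAAD(Buffer.from(`${recordId}:${field}`));
  decipher.setAuthTag(Buffer.from(tagB64, "base64"));
  return Buffer.concat([
    decipher.update(Buffer.from(ctB64, "base64")),
    decipher.final(),
  ]).toString("utf8");
}
```

Binding the AAD this way means a ciphertext copied into another row or column fails authentication instead of silently decrypting, which is why `recordId` is rebuilt as `shared-${hostAccessId}-${targetUserId}` on both the encrypt and decrypt paths above.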

View File

@@ -2,7 +2,12 @@ import { getDb, DatabaseSaveTrigger } from "../database/db/index.js";
import { DataCrypto } from "./data-crypto.js";
import type { SQLiteTable } from "drizzle-orm/sqlite-core";
type TableName =
| "users"
| "ssh_data"
| "ssh_credentials"
| "recent_activity"
| "socks5_proxy_presets";
class SimpleDBOps {
static async insert<T extends Record<string, unknown>>(
@@ -23,6 +28,20 @@ class SimpleDBOps {
userDataKey,
);
if (tableName === "ssh_credentials") {
const { SystemCrypto } = await import("./system-crypto.js");
const systemCrypto = SystemCrypto.getInstance();
const systemKey = await systemCrypto.getCredentialSharingKey();
const systemEncrypted = await DataCrypto.encryptRecordWithSystemKey(
tableName,
dataWithTempId,
systemKey,
);
Object.assign(encryptedData, systemEncrypted);
}
if (!data.id) {
delete encryptedData.id;
}
@@ -105,6 +124,20 @@ class SimpleDBOps {
userDataKey,
);
if (tableName === "ssh_credentials") {
const { SystemCrypto } = await import("./system-crypto.js");
const systemCrypto = SystemCrypto.getInstance();
const systemKey = await systemCrypto.getCredentialSharingKey();
const systemEncrypted = await DataCrypto.encryptRecordWithSystemKey(
tableName,
data,
systemKey,
);
Object.assign(encryptedData, systemEncrypted);
}
const result = await getDb()
.update(table)
.set(encryptedData)
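The `ssh_credentials` branch above writes a second copy of each secret encrypted under the instance-wide sharing key, which is what lets `getDecryptedCredentialViaSystemKey` work while the owner's DEK is unavailable. A simplified, self-contained sketch of that dual-write idea (the real code goes through DataCrypto and the schema's system* columns; names here are illustrative):

```typescript
import crypto from "node:crypto";

// Simplified dual-write: each secret is sealed twice, once under the owner's
// per-user DEK and once under the instance-wide credential-sharing key, so a
// share can be re-encrypted for a recipient even while the owner is offline.
function seal(plain: string, key: Buffer): string {
  const iv = crypto.randomBytes(12);
  const c = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([c.update(plain, "utf8"), c.final()]);
  return [iv, c.getAuthTag(), ct].map((b) => b.toString("base64")).join(".");
}

function open(sealed: string, key: Buffer): string {
  const [iv, tag, ct] = sealed.split(".").map((s) => Buffer.from(s, "base64"));
  const d = crypto.createDecipheriv("aes-256-gcm", key, iv);
  d.setAuthTag(tag);
  return Buffer.concat([d.update(ct), d.final()]).toString("utf8");
}

// Mirrors the password/systemPassword column pair.
function dualEncrypt(plain: string, userDEK: Buffer, sharingKey: Buffer) {
  return { password: seal(plain, userDEK), systemPassword: seal(plain, sharingKey) };
}
```

The trade-off is that the sharing key becomes a second decryption path for every credential, so it is treated like the JWT secret: generated once, stored in `.env`, and never sent to clients.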

View File

@@ -0,0 +1,131 @@
import { SocksClient } from "socks";
import type { SocksClientOptions } from "socks";
import net from "net";
import { sshLogger } from "./logger.js";
import type { ProxyNode } from "../../types/index.js";
export interface SOCKS5Config {
useSocks5?: boolean;
socks5Host?: string;
socks5Port?: number;
socks5Username?: string;
socks5Password?: string;
socks5ProxyChain?: ProxyNode[];
}
/**
* Creates a SOCKS5 connection through a single proxy or a chain of proxies
* @param targetHost - Target SSH server hostname/IP
* @param targetPort - Target SSH server port
* @param socks5Config - SOCKS5 proxy configuration
* @returns Promise with connected socket or null if SOCKS5 is not enabled
*/
export async function createSocks5Connection(
targetHost: string,
targetPort: number,
socks5Config: SOCKS5Config,
): Promise<net.Socket | null> {
if (!socks5Config.useSocks5) {
return null;
}
if (
socks5Config.socks5ProxyChain &&
socks5Config.socks5ProxyChain.length > 0
) {
return createProxyChainConnection(
targetHost,
targetPort,
socks5Config.socks5ProxyChain,
);
}
if (socks5Config.socks5Host) {
return createSingleProxyConnection(targetHost, targetPort, socks5Config);
}
return null;
}
/**
* Creates a connection through a single SOCKS proxy
*/
async function createSingleProxyConnection(
targetHost: string,
targetPort: number,
socks5Config: SOCKS5Config,
): Promise<net.Socket> {
const socksOptions: SocksClientOptions = {
proxy: {
host: socks5Config.socks5Host!,
port: socks5Config.socks5Port || 1080,
type: 5,
userId: socks5Config.socks5Username,
password: socks5Config.socks5Password,
},
command: "connect",
destination: {
host: targetHost,
port: targetPort,
},
};
try {
const info = await SocksClient.createConnection(socksOptions);
return info.socket;
} catch (error) {
sshLogger.error("SOCKS5 connection failed", error, {
operation: "socks5_connect_failed",
proxyHost: socks5Config.socks5Host,
proxyPort: socks5Config.socks5Port || 1080,
targetHost,
targetPort,
errorMessage: error instanceof Error ? error.message : "Unknown error",
});
throw error;
}
}
/**
* Creates a connection through a chain of SOCKS proxies
* Each proxy in the chain connects through the previous one
*/
async function createProxyChainConnection(
targetHost: string,
targetPort: number,
proxyChain: ProxyNode[],
): Promise<net.Socket> {
if (proxyChain.length === 0) {
throw new Error("Proxy chain is empty");
}
const chainPath = proxyChain.map((p) => `${p.host}:${p.port}`).join(" → ");
try {
const info = await SocksClient.createConnectionChain({
proxies: proxyChain.map((p) => ({
host: p.host,
port: p.port,
type: p.type,
userId: p.username,
password: p.password,
timeout: 10000,
})),
command: "connect",
destination: {
host: targetHost,
port: targetPort,
},
});
return info.socket;
} catch (error) {
sshLogger.error("SOCKS proxy chain connection failed", error, {
operation: "socks5_chain_connect_failed",
chain: chainPath,
chainLength: proxyChain.length,
targetHost,
targetPort,
errorMessage: error instanceof Error ? error.message : "Unknown error",
});
throw error;
}
}
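The mapping from `SOCKS5Config` to the socks client's proxy descriptor is the part worth testing in isolation: defaulting the port to 1080 and passing optional auth through unchanged. A small extracted sketch (hypothetical helper, not part of the file above):

```typescript
// Hypothetical extraction of the proxy-descriptor mapping used above:
// default the port to 1080 (the SOCKS convention) and force type 5.
interface ProxyDescriptor {
  host: string;
  port: number;
  type: 4 | 5;
  userId?: string;
  password?: string;
}

function toProxyDescriptor(cfg: {
  socks5Host: string;
  socks5Port?: number;
  socks5Username?: string;
  socks5Password?: string;
}): ProxyDescriptor {
  return {
    host: cfg.socks5Host,
    port: cfg.socks5Port ?? 1080,
    type: 5,
    userId: cfg.socks5Username,
    password: cfg.socks5Password,
  };
}
```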

View File

@@ -8,6 +8,7 @@ class SystemCrypto {
private jwtSecret: string | null = null;
private databaseKey: Buffer | null = null;
private internalAuthToken: string | null = null;
private credentialSharingKey: Buffer | null = null;
private constructor() {}
@@ -158,6 +159,48 @@ class SystemCrypto {
return this.internalAuthToken!;
}
async initializeCredentialSharingKey(): Promise<void> {
try {
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
const envKey = process.env.CREDENTIAL_SHARING_KEY;
if (envKey && envKey.length >= 64) {
this.credentialSharingKey = Buffer.from(envKey, "hex");
return;
}
try {
const envContent = await fs.readFile(envPath, "utf8");
const csKeyMatch = envContent.match(/^CREDENTIAL_SHARING_KEY=(.+)$/m);
if (csKeyMatch && csKeyMatch[1] && csKeyMatch[1].length >= 64) {
this.credentialSharingKey = Buffer.from(csKeyMatch[1], "hex");
process.env.CREDENTIAL_SHARING_KEY = csKeyMatch[1];
return;
}
} catch {
// .env file may not exist yet; fall through and generate a new key
}
await this.generateAndGuideCredentialSharingKey();
} catch (error) {
databaseLogger.error(
"Failed to initialize credential sharing key",
error,
{
operation: "cred_sharing_key_init_failed",
dataDir: process.env.DATA_DIR || "./db/data",
},
);
throw new Error("Credential sharing key initialization failed");
}
}
async getCredentialSharingKey(): Promise<Buffer> {
if (!this.credentialSharingKey) {
await this.initializeCredentialSharingKey();
}
return this.credentialSharingKey!;
}
private async generateAndGuideUser(): Promise<void> {
const newSecret = crypto.randomBytes(32).toString("hex");
const instanceId = crypto.randomBytes(8).toString("hex");
@@ -210,6 +253,26 @@ class SystemCrypto {
);
}
private async generateAndGuideCredentialSharingKey(): Promise<void> {
const newKey = crypto.randomBytes(32);
const newKeyHex = newKey.toString("hex");
const instanceId = crypto.randomBytes(8).toString("hex");
this.credentialSharingKey = newKey;
await this.updateEnvFile("CREDENTIAL_SHARING_KEY", newKeyHex);
databaseLogger.success(
"Credential sharing key auto-generated and saved to .env",
{
operation: "cred_sharing_key_auto_generated",
instanceId,
envVarName: "CREDENTIAL_SHARING_KEY",
note: "Used for offline credential sharing - no restart required",
},
);
}
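The resolution order implemented above — process environment first, then the `.env` file, then generation — hinges on the `>= 64` hex-character check (32 bytes). The parsing step can be isolated as a pure function (hypothetical helper for illustration):

```typescript
// Hypothetical pure-function version of the lookup order implemented above:
// process env wins, then the .env file; anything shorter than 64 hex chars
// (32 bytes) is rejected so a truncated key never silently weakens encryption.
function parseSharingKey(envValue: string | undefined, envFileContent: string): Buffer | null {
  if (envValue && envValue.length >= 64) {
    return Buffer.from(envValue, "hex");
  }
  const match = envFileContent.match(/^CREDENTIAL_SHARING_KEY=(.+)$/m);
  if (match?.[1] && match[1].length >= 64) {
    return Buffer.from(match[1], "hex");
  }
  return null; // caller falls through to key generation
}
```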
async validateJWTSecret(): Promise<boolean> {
try {
const secret = await this.getJWTSecret();