40 Commits

Author SHA1 Message Date
LukeGus
cde621275b fix: build error on docker 2026-01-02 03:21:45 -06:00
Luke Gustafson
d1b95d698f Change runner to blacksmith-4vcpu-ubuntu-2404 2026-01-02 02:39:35 -06:00
Luke Gustafson
f85609660a Remove NODE_OPTIONS from build commands in Dockerfile 2026-01-02 02:35:08 -06:00
Luke Gustafson
f48e645a56 Increase Node.js memory limit in Dockerfile 2026-01-02 02:20:23 -06:00
Luke Gustafson
da31164e23 Increase max old space size for npm builds 2026-01-02 02:13:08 -06:00
Luke Gustafson
f17a0c2854 fix: build error on docker (#477)
* fix: electron build errors and skip macos job

* fix: testflight submit failure

* fix: made submit job match build type

* fix: resolve Vite build warnings for mixed static/dynamic imports (#473)

* Update Crowdin configuration file

* Update Crowdin configuration file

* fix: resolve Vite build warnings for mixed static/dynamic imports

- Convert all dynamic imports of main-axios.ts to static imports (10 files)
- Convert all dynamic imports of sonner to static imports (4 files)
- Add manual chunking configuration to vite.config.ts for better bundle splitting
  - react-vendor: React and React DOM
  - ui-vendor: Radix UI, lucide-react, clsx, tailwind-merge
  - monaco: Monaco Editor
  - codemirror: CodeMirror and related packages
- Increase chunkSizeWarningLimit to 1000kB

This resolves Vite warnings about mixed import strategies preventing
proper code-splitting.
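
As an illustration of the chunking setup described above, here is a minimal vite.config.ts sketch; the exact package lists (especially for the Radix UI and CodeMirror groups) are representative guesses, not the repository's actual configuration:

```ts
// vite.config.ts — illustrative sketch only
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    chunkSizeWarningLimit: 1000, // raise the warning threshold to 1000 kB
    rollupOptions: {
      output: {
        manualChunks: {
          // vendor groups roughly matching the list above
          "react-vendor": ["react", "react-dom"],
          "ui-vendor": ["lucide-react", "clsx", "tailwind-merge"],
          monaco: ["monaco-editor"],
          codemirror: ["codemirror"],
        },
      },
    },
  },
});
```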

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Termix CI <ci@termix.dev>
Co-authored-by: Claude <noreply@anthropic.com>

* fix: file manager incorrectly decoding/encoding when editing files (made base64/utf8 dependent)

* fix: build error on docker

---------

Co-authored-by: Jefferson Nunn <89030989+jeffersonwarrior@users.noreply.github.com>
Co-authored-by: Termix CI <ci@termix.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-02 01:52:58 -06:00
Luke Gustafson
9936ef469d fix: file manager incorrectly decoding/encoding when editing files (#476)
* fix: electron build errors and skip macos job

* fix: testflight submit failure

* fix: made submit job match build type

* fix: resolve Vite build warnings for mixed static/dynamic imports (#473)

* Update Crowdin configuration file

* Update Crowdin configuration file

* fix: resolve Vite build warnings for mixed static/dynamic imports

- Convert all dynamic imports of main-axios.ts to static imports (10 files)
- Convert all dynamic imports of sonner to static imports (4 files)
- Add manual chunking configuration to vite.config.ts for better bundle splitting
  - react-vendor: React and React DOM
  - ui-vendor: Radix UI, lucide-react, clsx, tailwind-merge
  - monaco: Monaco Editor
  - codemirror: CodeMirror and related packages
- Increase chunkSizeWarningLimit to 1000kB

This resolves Vite warnings about mixed import strategies preventing
proper code-splitting.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Termix CI <ci@termix.dev>
Co-authored-by: Claude <noreply@anthropic.com>

* fix: file manager incorrectly decoding/encoding when editing files (made base64/utf8 dependent)

---------

Co-authored-by: Jefferson Nunn <89030989+jeffersonwarrior@users.noreply.github.com>
Co-authored-by: Termix CI <ci@termix.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-02 00:33:10 -06:00
Gaylord Julien
fc87146e4b Update Linux Portable section with AUR link (#474) 2026-01-01 18:42:51 -06:00
Luke Gustafson
fceb430d22 Update Crowdin configuration file 2026-01-01 01:57:13 -06:00
Luke Gustafson
5168ded79d Update Crowdin configuration file 2026-01-01 01:52:38 -06:00
LukeGus
51e6826c95 chore: update termix rb for 1.10.0 2025-12-31 22:45:43 -06:00
Luke Gustafson
ad86c2040b v1.10.0 (#471)
* fix select edit host but not update view (#438)

* fix: Checksum issue with chocolatey

* fix: Remove homebrew old stuff

* Add Korean translation (#439)

Co-authored-by: 송준우 <2484@coreit.co.kr>

* feat: Automate flatpak

* fix: Add imagemagick to electron builder to resolve build error

* fix: Build error with runtime repo flag

* fix: Flatpak runtime error and install freedesktop ver warning

* fix: Flatpak runtime error and install freedesktop ver warning

* feat: Re-add homebrew cask and move scripts to backend

* fix: No sandbox flag issue

* fix: Change name for electron macos cask output

* fix: Sandbox error with Linux

* fix: Remove coming soon for app stores in readme

* Add a comment at the end of the public_key on the host on deploy (#440)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* - Add new interface for Credential DB
- Add credential name as a comment into the server authorized_keys file

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Sudo auto fill password (#441)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Feature Sudo password auto-fill;

* Fix locale json schema;

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Added Italian Language; (#445)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Added Italian Language;

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Auto collapse snippet folders (#448)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* feat: Add collapsible snippets (customizable in user profile)

* Translations (#447)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Added Italian Language;

* Fix translations;

Removed duplicate keys, synchronised other languages using English as the source, translated added keys, fixed inaccurate translations.

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* Remove PTY-level keepalive (#449)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Remove PTY-level keepalive to prevent unwanted terminal output; use SSH-level keepalive instead
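
For reference, SSH-level keepalive with the ssh2 client looks roughly like the sketch below; the host details and interval values are placeholders, not the project's actual settings:

```ts
import { Client } from "ssh2";

const conn = new Client();
conn.connect({
  host: "example.com",
  username: "user",
  password: "placeholder",
  // SSH-protocol keepalive happens on the transport layer,
  // so nothing is ever written to the PTY / terminal output.
  keepaliveInterval: 30_000, // ms between keepalive packets
  keepaliveCountMax: 3, // give up after 3 unanswered keepalives
});
```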

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* feat: Separate server stats and tunnel management (improved both UIs), then started initial docker implementation

* fix: finalize adding docker to db

* feat: Add docker management support (local squash)

* Fix RBAC role system bugs and improve UX (#446)

* Fix RBAC role system bugs and improve UX

- Fix user list dropdown selection in host sharing
- Fix role sharing permissions to include role-based access
- Fix translation template interpolation for success messages
- Standardize system roles to admin and user only
- Auto-assign user role to new registrations
- Remove blocking confirmation dialogs in modal contexts
- Add missing i18n keys for common actions
- Fix button type to prevent unintended form submissions

* Enhance RBAC system with UI improvements and security fixes

- Move role assignment to Users tab with per-user role management
- Protect system roles (admin/user) from editing and manual assignment
- Simplify permission system: remove Use level, keep View and Manage
- Hide Update button and Sharing tab for view-only/shared hosts
- Prevent users from sharing hosts with themselves
- Unify table and modal styling across admin panels
- Auto-assign system roles on user registration
- Add permission metadata to host interface

* Add empty state message for role assignment

- Display helpful message when no custom roles available
- Clarify that system roles are auto-assigned
- Add noCustomRolesToAssign translation in English and Chinese

* fix: Prevent credential sharing errors for shared hosts

- Skip credential resolution for shared hosts with credential authentication
  to prevent decryption errors (credentials are encrypted per-user)
- Add warning alert in sharing tab when host uses credential authentication
- Inform users that shared users cannot connect to credential-based hosts
- Add translations for credential sharing warning (EN/ZH)

This prevents authentication failures when sharing hosts configured
with credential authentication while maintaining security by keeping
credentials isolated per user.

* feat: Improve rbac UI and fix some bugs

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* SOCKS5 support (#452)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* SOCKS5 support

Adding single and chain socks5 proxy support

* fix: cleanup files

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* Add Notes and Expiry fields (#453)

* Add termix.rb Cask file

* Update Termix to version 1.9.0 with new checksum

* Update README to remove 'coming soon' notes

* Add Notes and Expiry fields

* fix: cleanup files

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* fix: ssh host types

* fix: sudo incorrect styling and remove expiration date

* feat: add sudo password and add diagonal backgrounds

* fix: snippet running on enter key

* fix: base64 decoding

* fix: improve server stats / rbac

* fix: wrap ssh host json export in hosts array

* feat: auto-trim host inputs, fix file manager jump hosts, prevent dashboard duplicates, fix file manager terminal not updating its size, improve left sidebar sorting, hide/show tags, add appearance user profile tab, add new host manager tabs.

* feat: improve terminal connection speed

* fix: sqlite constraint errors and support non-root user (nginx perm issue)

* feat: add beta syntax highlighting to terminal

* feat: update imports and improve admin settings user management

* chore: update translations

* chore: update translations

* feat: Complete light mode implementation with semantic theme system (#450)

- Add comprehensive light/dark mode CSS variables with semantic naming
- Implement theme-aware scrollbars using CSS variables
- Add light mode backgrounds: --bg-base, --bg-elevated, --bg-surface, etc.
- Add theme-aware borders: --border-base, --border-panel, --border-subtle
- Add semantic text colors: --foreground-secondary, --foreground-subtle
- Convert oklch colors to hex for better compatibility
- Add theme awareness to CodeMirror editors
- Update dark mode colors for consistency (background, sidebar, card, muted, input)
- Add Tailwind color mappings for semantic classes
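
A rough sketch of how such semantic variables can be mapped into Tailwind classes (assuming a Tailwind v3-style config; the variable names follow the list above, everything else is hypothetical):

```ts
// tailwind.config.ts — illustrative only
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // each semantic class resolves to a CSS variable that flips with the theme
        "bg-base": "var(--bg-base)",
        "bg-elevated": "var(--bg-elevated)",
        "bg-surface": "var(--bg-surface)",
        "border-base": "var(--border-base)",
        "foreground-secondary": "var(--foreground-secondary)",
        "foreground-subtle": "var(--foreground-subtle)",
      },
    },
  },
} satisfies Config;
```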

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* fix: syntax errors

* chore: updating/match themes and split admin settings

* feat: add translation workflow and remove old translation.json

* fix: translation workflow error

* fix: translation workflow error

* feat: improve translation system and update workflow

* fix: wrong path for translations

* fix: change translation to flat files

* fix: gh rule error

* chore: auto-translate to multiple languages (#458)

* chore: improve organization and make a few styling changes in host manager

* feat: improve terminal stability and split out the host manager

* fix: add unversioned files

* chore: migrate all to use the new theme system

* fix: wrong animation line colors

* fix: general rbac implementation issues (local squash)

* fix: remove unneeded files

* feat: add 10 new langs

* chore: update gitignore

* chore: auto-translate to multiple languages (#459)

* fix: improve tunnel system

* fix: properly split tabs, still need to fix up the host manager

* chore: cleanup files (possible RC)

* feat: add norwegian

* chore: auto-translate to multiple languages (#461)

* fix: small qol fixes and begin readme update

* fix: run cleanup script

* feat: add docker docs button

* feat: general bug fixes and readme updates

* fix: translations

* chore: auto-translate to multiple languages (#462)

* fix: cleanup files

* fix: test new translation issue and add better server-stats support

* fix: fix translate error

* chore: auto-translate to multiple languages (#463)

* fix: fix translate mismatching text

* chore: auto-translate to multiple languages (#465)

* fix: fix translate mismatching text

* fix: fix translate mismatching text

* chore: auto-translate to multiple languages (#466)

* fix: fix translate mismatching text

* fix: fix translate mismatching text

* fix: fix translate mismatching text

* chore: auto-translate to multiple languages (#467)

* fix: fix translate mismatching text

* chore: auto-translate to multiple languages (#468)

* feat: add to readme, a few qol changes, and improve server stats in general

* chore: auto-translate to multiple languages (#469)

* feat: turned disk usage into a graph and fixed issue with terminal console

* fix: electron build error and hide icons when shared

* chore: run clean

* fix: general server stats issues, file manager decoding, ui qol

* fix: add dashboard line breaks

* fix: docker console error

* fix: docker console not loading and mismatched striped background for electron

* fix: docker console not loading

* chore: docker console not loading in docker

* chore: translate readme to chinese

* chore: match package lock to package json

* chore: nginx config issue for docker console

* chore: auto-translate to multiple languages (#470)

---------

Co-authored-by: Tran Trung Kien <kientt13.7@gmail.com>
Co-authored-by: junu <bigdwarf_@naver.com>
Co-authored-by: 송준우 <2484@coreit.co.kr>
Co-authored-by: SlimGary <trash.slim@gmail.com>
Co-authored-by: Nunzio Marfè <nunzio.marfe@protonmail.com>
Co-authored-by: Wesley Reid <starhound@lostsouls.org>
Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Denis <38875137+Medvedinca@users.noreply.github.com>
Co-authored-by: Peet McKinney <68706879+PeetMcK@users.noreply.github.com>
2025-12-31 22:20:12 -06:00
Luke Gustafson
7139290d14 Add GitHub Actions workflow for auto translation
This workflow automates the translation of JSON files using the i18n-ai-translate action, committing changes back to the repository.
2025-12-24 14:04:28 -06:00
Luke Gustafson
f0647dc7c1 Update README to remove 'coming soon' notes 2025-11-26 23:38:58 -06:00
Luke Gustafson
403800f42b Update Termix to version 1.9.0 with new checksum 2025-11-26 19:36:24 -06:00
Luke Gustafson
84ca8080f0 Add termix.rb Cask file 2025-11-26 19:33:15 -06:00
Luke Gustafson
8366c99b0f v1.9.0 (#437)
* fix: Resolve database encryption atomicity issues and enhance debugging (#430)

* fix: Resolve database encryption atomicity issues and enhance debugging

This commit addresses critical data corruption issues caused by non-atomic
file writes during database encryption, and adds comprehensive diagnostic
logging to help debug encryption-related failures.

**Problem:**
Users reported "Unsupported state or unable to authenticate data" errors
when starting the application after system crashes or Docker container
restarts. The root cause was non-atomic writes of encrypted database files:

1. Encrypted data file written (step 1)
2. Metadata file written (step 2)
→ If process crashes between steps 1 and 2, files become inconsistent
→ New IV/tag in data file, old IV/tag in metadata
→ GCM authentication fails on next startup
→ User data permanently inaccessible

**Solution - Atomic Writes:**

1. Write-to-temp + atomic-rename pattern:
   - Write to temporary files (*.tmp-timestamp-pid)
   - Perform atomic rename operations
   - Clean up temp files on failure

2. Data integrity validation:
   - Add dataSize field to metadata
   - Verify file size before decryption
   - Early detection of corrupted writes

3. Enhanced error diagnostics:
   - Key fingerprints (SHA256 prefix) for verification
   - File modification timestamps
   - Detailed GCM auth failure messages
   - Automatic diagnostic info generation
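
A minimal sketch of the write-to-temp + atomic-rename pattern described above, using Node's fs promises API; the helper name is illustrative, not the actual implementation:

```ts
import { promises as fs } from "fs";
import path from "path";

// Never leaves the destination half-written: either the old file or the new one exists.
async function atomicWriteFile(dest: string, data: Buffer): Promise<void> {
  const tmp = path.join(
    path.dirname(dest),
    `${path.basename(dest)}.tmp-${Date.now()}-${process.pid}`,
  );
  try {
    await fs.writeFile(tmp, data); // step 1: write the full payload to a temp file
    await fs.rename(tmp, dest); // step 2: atomic rename swaps it into place
  } catch (err) {
    await fs.rm(tmp, { force: true }); // clean up the temp file on failure
    throw err;
  }
}
```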

**Changes:**

database-file-encryption.ts:
- Implement atomic write pattern in encryptDatabaseFromBuffer
- Implement atomic write pattern in encryptDatabaseFile
- Add dataSize field to EncryptedFileMetadata interface
- Validate file size before decryption in decryptDatabaseToBuffer
- Enhanced error messages for GCM auth failures
- Add getDiagnosticInfo() function for comprehensive debugging
- Add debug logging for all encryption/decryption operations

system-crypto.ts:
- Add detailed logging for DATABASE_KEY initialization
- Log key source (env var vs .env file)
- Add key fingerprints to all log messages
- Better error messages when key loading fails

db/index.ts:
- Automatically generate diagnostic info on decryption failure
- Log detailed debugging information to help users troubleshoot

**Debugging Info Added:**

- Key initialization: source, fingerprint, length, path
- Encryption: original size, encrypted size, IV/tag prefixes, temp paths
- Decryption: file timestamps, metadata content, key fingerprint matching
- Auth failures: .env file status, key availability, file consistency
- File diagnostics: existence, readability, size validation, mtime comparison

**Backward Compatibility:**
- dataSize field is optional (metadata.dataSize?: number)
- Old encrypted files without dataSize continue to work
- No migration required

**Testing:**
- Compiled successfully
- No breaking changes to existing APIs
- Graceful handling of legacy v1 encrypted files

Fixes data loss issues reported by users experiencing container restarts
and system crashes during database saves.

* fix: Cleanup PR

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/backend/utils/database-file-encryption.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: LukeGus <bugattiguy527@gmail.com>
Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: Merge metadata and DB into 1 file

* fix: Add initial command palette

* Feature/german language support (#431)

* Update translation.json

Fixed some translation issues for German, making it more user-friendly and natural.

* Update translation.json

added updated block for serverStats

* Update translation.json

Added translations

* Update translation.json

Removed duplicate of "free":"Free"

* feat: Finalize command palette

* fix: Several bug fixes for terminals, server stats, and general feature improvements

* feat: Enhanced security, UI improvements, and animations (#432)

* fix: Remove empty catch blocks and add error logging

* refactor: Modularize server stats widget collectors

* feat: Add i18n support for terminal customization and login stats

- Add comprehensive terminal customization translations (60+ keys) for appearance, behavior, and advanced settings across all 4 languages
- Add SSH login statistics translations
- Update HostManagerEditor to use i18n for all terminal customization UI elements
- Update LoginStatsWidget to use i18n for all UI text
- Add missing logger imports in backend files for improved debugging

* feat: Add keyboard shortcut enhancements with Kbd component

- Add shadcn kbd component for displaying keyboard shortcuts
- Enhance file manager context menu to display shortcuts with Kbd component
- Add 5 new keyboard shortcuts to file manager:
  - Ctrl+D: Download selected files
  - Ctrl+N: Create new file
  - Ctrl+Shift+N: Create new folder
  - Ctrl+U: Upload files
  - Enter: Open/run selected file
- Add keyboard shortcut hints to command palette footer
- Create helper function to parse and render keyboard shortcuts

* feat: Add i18n support for command palette

- Add commandPalette translation section with 22 keys to all 4 languages
- Update CommandPalette component to use i18n for all UI text
- Translate search placeholder, group headings, menu items, and shortcut hints
- Support multilingual command palette interface

* feat: Add smooth transitions and animations to UI

- Add fade-in/fade-out transition to command palette (200ms)
- Add scale animation to command palette on open/close
- Add smooth popup animation to context menu (150ms)
- Add visual feedback for file selection with ring effect
- Add hover scale effect to file grid items
- Add transition-all to list view items for consistent behavior
- Zero JavaScript overhead, pure CSS transitions
- All animations under 200ms for instant feel

* feat: Add button active state and dashboard card animations

- Add active:scale-95 to all buttons for tactile click feedback
- Add hover border effect to dashboard cards (150ms transition)
- Add pulse animation to dashboard loading states
- Pure CSS transitions with zero JavaScript overhead
- Improves enterprise-level feel of UI

* feat: Add smooth macOS-style page transitions

- Add fullscreen crossfade transition for login/logout (300ms fade-out + 400ms fade-in)
- Add slide-in-from-right animation for all page switches (Dashboard, Terminal, SSH Manager, Admin, Profile)
- Fix TypeScript compilation by adding esModuleInterop to tsconfig.node.json
- Pass handleLogout from DesktopApp to LeftSidebar for consistent transition behavior

All page transitions now use Tailwind animate-in utilities with 300ms duration for smooth, native-feeling UX

* fix: Add key prop to force animation re-trigger on tab switch

Each page container now has key={currentTab} to ensure React unmounts and remounts the element on every tab switch, properly triggering the slide-in animation

* revert: Remove page transition animations

Page switching animations were not noticeable enough and felt unnecessary.
Keep only the login/logout fullscreen crossfade transitions which provide clear visual feedback for authentication state changes

* feat: Add ripple effect to login/logout transitions

Add three-layer expanding ripple animation during fadeOut phase:
- Ripples expand from screen center using primary theme color
- Each layer has staggered delay (0ms, 150ms, 300ms) for wave effect
- Ripples fade out as they expand to create elegant visual feedback
- Uses pure CSS keyframe animation, no external libraries

Total animation: 800ms ripple + 300ms screen fade

* feat: Add smooth TERMIX logo animation to transitions

Changes:
- Extend transition duration from 300ms/400ms to 800ms/600ms for more elegant feel
- Reduce ripple intensity from /20,/15,/10 to /8,/5 for subtlety
- Slow down ripple animation from 0.8s to 2s with cubic-bezier easing
- Add centered TERMIX logo with monospace font and subtitle
- Logo fades in from 80% scale, holds, then fades out at 110% scale
- Total effect: 1.2s logo animation synced with 2s ripple waves

Creates a premium, branded transition experience

* feat: Enhance transition animation with premium details

Timing adjustments:
- Extend fadeOut from 800ms to 1200ms
- Extend fadeIn from 600ms to 800ms
- Slow background fade to 700ms for elegance

Visual enhancements:
- Add 4-layer ripple waves (10%, 7%, 5%, 3% opacity) with staggered delays
- Ripple animation extended to 2.5s with refined opacity curve
- Logo blur effect: starts at 8px, sharpens to 0px, exits at 4px
- Logo glow effect: triple-layer text-shadow using primary theme color
- Increase logo size from text-6xl to text-7xl
- Subtitle delayed fade-in from bottom with smooth slide animation

Creates a cinematic, polished brand experience

* feat: Redesign login page with split-screen cinematic layout

Major redesign of authentication page:

Left Side (40% width):
- Full-height gradient background using primary theme color
- Large TERMIX logo with glow effect
- Subtitle and tagline
- Infinite animated ripple waves (3 layers)
- Hidden on mobile, shows brand identity

Right Side (60% width):
- Centered glassmorphism card with backdrop blur
- Refined tab switcher with pill-style active state
- Enlarged title with gradient text effect
- Added welcome subtitles for better UX
- Card slides in from bottom on load
- All existing functionality preserved

Visual enhancements:
- Tab navigation: segmented control style in muted container
- Active tab: white background with subtle shadow
- Smooth 200ms transitions on all interactions
- Card: rounded-2xl, shadow-xl, semi-transparent border

Creates premium, modern login experience matching transition animations

* feat: Update login page theme colors and add i18n support

- Changed login page gradient from blue to match dark theme colors
- Updated ripple effects to use theme primary color
- Added i18n translation keys for login page (auth.tagline, auth.description, auth.welcomeBack, auth.createAccount, auth.continueExternal)
- Updated all language files (en, zh, de, ru, pt-BR) with new translations
- Fixed TypeScript compilation issues by clearing build cache

* refactor: Use shadcn Tabs component and fix modal styling

- Replace custom tab navigation with shadcn Tabs component
- Restore border-2 border-dark-border for modal consistency
- Remove circular icon from login success message
- Simplify authentication success display

* refactor: Remove ripple effects and gradient from login page

- Remove animated ripple background effects
- Remove gradient background, use solid color (bg-dark-bg-darker)
- Remove text-shadow glow effect from logo
- Simplify brand showcase to clean, minimal design

* feat: Add decorative slash and remove subtitle from login page

- Add decorative slash divider with gradient lines below TERMIX logo
- Remove subtitle text (welcomeBack and createAccount)
- Simplify page title to show only the main heading

* feat: Add diagonal line pattern background to login page

- Replace decorative slash with subtle diagonal line pattern background
- Use repeating-linear-gradient at 45deg angle
- Set very low opacity (0.03) for subtle effect
- Pattern uses theme primary color

* fix: Display diagonal line pattern on login background

- Combine background color and pattern in single style attribute
- Use white semi-transparent lines (rgba 0.03 opacity)
- 45deg angle, 35px spacing, 2px width
- Remove separate overlay div to ensure pattern visibility

* security: Fix user enumeration vulnerability in login

- Unify error messages for invalid username and incorrect password
- Both return 401 status with 'Invalid username or password'
- Prevent attackers from enumerating valid usernames
- Maintain detailed logging for debugging purposes
- Changed from 404 'User not found' to generic auth failure message
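
The gist of the change, sketched as a standalone helper (bcryptjs and the field names are stand-ins, not the project's code):

```ts
import bcrypt from "bcryptjs";

interface StoredUser {
  username: string;
  passwordHash: string;
}

// Unknown user and wrong password collapse into the same generic 401 response,
// so callers cannot tell which of the two occurred.
async function verifyLogin(
  user: StoredUser | undefined,
  password: string,
): Promise<{ ok: true } | { ok: false; status: 401; error: string }> {
  if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
    return { ok: false, status: 401, error: "Invalid username or password" };
  }
  return { ok: true };
}
```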

* security: Add login rate limiting to prevent brute force attacks

- Implement LoginRateLimiter with IP and username-based tracking
- Block after 5 failed attempts within 15 minutes
- Lock account/IP for 15 minutes after threshold
- Automatic cleanup of expired entries every 5 minutes
- Track remaining attempts in logs for monitoring
- Return 429 status with remaining time on rate limit
- Reset counters on successful login
- Dual protection: both IP-based and username-based limits
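
A simplified sketch of such a limiter; the thresholds follow the description above, while the class and method names are hypothetical:

```ts
// In-memory tracker: 5 failures within 15 minutes locks the key for 15 minutes.
class LoginRateLimiter {
  private attempts = new Map<
    string,
    { count: number; firstAt: number; lockedUntil?: number }
  >();
  private readonly maxAttempts = 5;
  private readonly windowMs = 15 * 60 * 1000;

  isBlocked(key: string): boolean {
    const entry = this.attempts.get(key);
    return !!entry?.lockedUntil && entry.lockedUntil > Date.now();
  }

  recordFailure(key: string): void {
    const now = Date.now();
    const entry = this.attempts.get(key);
    if (!entry || now - entry.firstAt > this.windowMs) {
      this.attempts.set(key, { count: 1, firstAt: now });
      return;
    }
    entry.count += 1;
    if (entry.count >= this.maxAttempts) entry.lockedUntil = now + this.windowMs;
  }

  recordSuccess(key: string): void {
    this.attempts.delete(key); // reset counters on successful login
  }
}

// Typical use: check isBlocked for both the client IP and the username before
// verifying credentials, returning 429 with the remaining lockout time if either is blocked.
```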

* French translation (#434)

* Adding French Language

* Enhancements

* feat: Replace the old ssh tools system with a new dedicated sidebar

* fix: Merge zac/luke

* fix: Finalize new sidebar, improve loading animations

* Added ability to close non-primary tabs involved in a split view (#435)

* fix: General bug fixes/small feature improvements

* feat: General UI improvements and translation updates

* fix: Command history and file manager styling issues

* feat: General bug fixes, added server stat commands, improved split screen, link accounts, etc

* fix: add Accept header for OIDC callback request (#436)

* Delete DOWNLOADS.md

* fix: add Accept header for OIDC callback request

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* fix: More bug fixes and QOL fixes

* fix: Server stats not respecting interval and fixed SSH tool type issues

* fix: Remove github links

* fix: Delete account spacing

* fix: Increment version

* fix: Unable to delete hosts and add nginx for terminal

* fix: Unable to delete hosts

* fix: Unable to delete hosts

* fix: Unable to delete hosts

* fix: OIDC/local account linking breaking both logins

* chore: File cleanup

* feat: Max terminal tab size and save current file manager sorting type

* fix: Terminal display issue, migrate host editor to use combobox

* feat: Add snippet folder/customization system

* fix: Fix OIDC linking and prep release

* fix: Increment version

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Max <herzmaximilian@gmail.com>
Co-authored-by: SlimGary <trash.slim@gmail.com>
Co-authored-by: jarrah31 <jarrah31@gmail.com>
Co-authored-by: Kf637 <mail@kf637.tech>
2025-11-17 09:46:05 -06:00
Luke Gustafson
38a59f3579 Delete DOWNLOADS.md 2025-11-11 11:18:06 -06:00
LukeGus
9ca7df6542 fix: Auth ref error 2025-11-05 11:43:14 -06:00
LukeGus
a27d8f264e fix: Production build error 2025-11-05 10:40:46 -06:00
Luke Gustafson
8ec22b2177 v1.8.0 (#429)
* Dev 1.8.0 (#399)

* Feature request: Add delete confirmation dialog to file manager (#344)

* Feature request: Add delete confirmation dialog to file manager

- Added confirmation dialog before deleting files/folders
- Users must confirm deletion with a warning message
- Works for both Delete key and right-click delete
- Shows different messages for single file, folder, or multiple items
- Includes permanent deletion warning
- Follows existing design patterns using confirmWithToast

* Adds confirmation for deletion of items including folders

Updates the file deletion confirmation logic to distinguish between
deleting multiple items with or without folders. Introduces a new
translation string for a clearer user prompt when folders and their
contents are included in the deletion.

Improves clarity and reduces user error when performing bulk deletions.

* feat: Add Chinese translations for delete confirmation messages

* Adds camelCase support for encrypted field mappings (#342)

Extends encrypted field mappings to include camelCase variants
to support consistency and compatibility with different naming
conventions. Updates reverse mappings for Drizzle ORM to allow
conversion between camelCase and snake_case field names.

Improves integration with systems using mixed naming styles.
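
The conversion itself amounts to a pair of helpers along these lines (a generic sketch, not the project's mapping code):

```ts
// snake_case -> camelCase, e.g. "key_password" -> "keyPassword"
const toCamelCase = (field: string): string =>
  field.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());

// camelCase -> snake_case, e.g. "keyPassword" -> "key_password"
const toSnakeCase = (field: string): string =>
  field.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`);
```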

* Run code cleanup, add sidebar persistence, fix OIDC credentials, force SSH password.

* Fix snake case mismatching

* Add real client IP

* Fix OIDC credential persistence issue

The issue was that OIDC users were getting a new random Data Encryption Key (DEK)
on every login, which made previously encrypted credentials inaccessible.

Changes:
- Modified setupOIDCUserEncryption() to persist the DEK encrypted with a system-derived key
- Updated authenticateOIDCUser() to properly retrieve and use the persisted DEK
- Ensured OIDC users now have the same encryption persistence as password-based users

This fix ensures that credentials created by OIDC users remain accessible across
multiple login sessions.
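
Conceptually, persisting the DEK can be sketched as wrapping it with a key derived from a server-side secret plus the user id (AES-256-GCM via Node's crypto module; all names and key-derivation inputs here are hypothetical, not the project's scheme):

```ts
import crypto from "crypto";

// Encrypt ("wrap") the user's DEK with a system-derived key so the same DEK
// can be recovered on every OIDC login instead of generating a new one.
function wrapDEK(dek: Buffer, systemSecret: string, userId: string): string {
  const systemKey = crypto.scryptSync(systemSecret, userId, 32);
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", systemKey, iv);
  const wrapped = Buffer.concat([cipher.update(dek), cipher.final()]);
  const tag = cipher.getAuthTag();
  return [iv, tag, wrapped].map((b) => b.toString("base64")).join(".");
}

function unwrapDEK(stored: string, systemSecret: string, userId: string): Buffer {
  const [iv, tag, wrapped] = stored
    .split(".")
    .map((part) => Buffer.from(part, "base64"));
  const systemKey = crypto.scryptSync(systemSecret, userId, 32);
  const decipher = crypto.createDecipheriv("aes-256-gcm", systemKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(wrapped), decipher.final()]);
}
```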

* Fix race condition and remove redundant kekSalt for OIDC users

Critical fixes:

1. Race Condition Mitigation:
   - Added read-after-write verification in setupOIDCUserEncryption()
   - Ensures session uses the DEK that's actually in the database
   - Prevents data loss when concurrent logins occur for new OIDC users
   - If race is detected, discards generated DEK and uses stored one

2. Remove Redundant kekSalt Logic:
   - Removed unnecessary kekSalt generation and checks for OIDC users
   - kekSalt is not used in OIDC key derivation (uses userId as salt)
   - Reduces database operations from 4 to 2 per authentication
   - Simplifies code and removes potential confusion

3. Improved Error Handling:
   - systemKey cleanup moved to finally block
   - Ensures sensitive key material is always cleared from memory

These changes ensure data consistency and prevent potential data loss
in high-concurrency scenarios.

* Cleanup OIDC pr and run prettier

* Replace JetBrains Mono with Caskaydia Cove

* Fix alert issues

* Finalize font update

* Feature/german language support (#374)

* v1.7.2 (#364)

* Feature request: Add delete confirmation dialog to file manager (#344)

* Feature request: Add delete confirmation dialog to file manager

- Added confirmation dialog before deleting files/folders
- Users must confirm deletion with a warning message
- Works for both Delete key and right-click delete
- Shows different messages for single file, folder, or multiple items
- Includes permanent deletion warning
- Follows existing design patterns using confirmWithToast

* Adds confirmation for deletion of items including folders

Updates the file deletion confirmation logic to distinguish between
deleting multiple items with or without folders. Introduces a new
translation string for a clearer user prompt when folders and their
contents are included in the deletion.

Improves clarity and reduces user error when performing bulk deletions.

* feat: Add Chinese translations for delete confirmation messages

* Adds camelCase support for encrypted field mappings (#342)

Extends encrypted field mappings to include camelCase variants
to support consistency and compatibility with different naming
conventions. Updates reverse mappings for Drizzle ORM to allow
conversion between camelCase and snake_case field names.

Improves integration with systems using mixed naming styles.

* Run code cleanup, add sidebar persistence, fix OIDC credentials, force SSH password.

* Fix snake case mismatching

* Add real client IP

* Fix OIDC credential persistence issue

The issue was that OIDC users were getting a new random Data Encryption Key (DEK)
on every login, which made previously encrypted credentials inaccessible.

Changes:
- Modified setupOIDCUserEncryption() to persist the DEK encrypted with a system-derived key
- Updated authenticateOIDCUser() to properly retrieve and use the persisted DEK
- Ensured OIDC users now have the same encryption persistence as password-based users

This fix ensures that credentials created by OIDC users remain accessible across
multiple login sessions.

* Fix race condition and remove redundant kekSalt for OIDC users

Critical fixes:

1. Race Condition Mitigation:
   - Added read-after-write verification in setupOIDCUserEncryption()
   - Ensures session uses the DEK that's actually in the database
   - Prevents data loss when concurrent logins occur for new OIDC users
   - If race is detected, discards generated DEK and uses stored one

2. Remove Redundant kekSalt Logic:
   - Removed unnecessary kekSalt generation and checks for OIDC users
   - kekSalt is not used in OIDC key derivation (uses userId as salt)
   - Reduces database operations from 4 to 2 per authentication
   - Simplifies code and removes potential confusion

3. Improved Error Handling:
   - systemKey cleanup moved to finally block
   - Ensures sensitive key material is always cleared from memory

These changes ensure data consistency and prevent potential data loss
in high-concurrency scenarios.

* Cleanup OIDC pr and run prettier

---------

Co-authored-by: Ved Prakash <54140516+thorved@users.noreply.github.com>

* Fix typos and improve wording in README.md

Corrected grammar and punctuation in README.

* Image 7.png

* Rename 3gi3b3os5psf1.png to Image 7.png

* Add video demonstration to README

Added a video demonstration to the README.

* Delete repo-images/Image 7.png

* Add files via upload

* Delete repo-images/Image 7.png

* Add files via upload

* Initial German translation

* German translation (#281)

* German translation (#281)

* Implementation of German language support  (#281)

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: Karmaa <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Ved Prakash <54140516+thorved@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add German support

* Fix SSH Key Password (keyPassword) Field Naming Mismatch Between Frontend and Backend (#375)

* Refactor key_password to keyPassword for consistency across SSH routes

* Standardizes keyPassword field handling and simplifies auth field logic

Standardizes the handling of the `keyPassword` field by converting
`key_password` to camelCase and ensuring consistent output while
preserving resolved credentials. Removes redundant snake_case
fields to avoid duplication.

Simplifies UI handling of authentication fields by allowing
non-relevant fields to persist, delegating filtering logic to the
backend for cleaner and more maintainable code.

Improves code clarity and aligns with consistent data handling
practices.

* Cleanup code + resolve conversion logic

---------

Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* Feature disable password login (#378)

* Add admin toggle to disable password login

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/ui/main-axios.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/ui/Desktop/Admin/AdminSettings.tsx

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add SSH TOTP authentication support (#350)

* Add SSH TOTP authentication support

- Implement keyboard-interactive authentication for SSH connections
- Add TOTP dialog component for Terminal and File Manager
- Handle TOTP prompts in WebSocket and HTTP connections
- Disable Server Stats for TOTP-enabled servers
- Add i18n support for TOTP-related messages
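
With the ssh2 client, keyboard-interactive handling follows this general shape; connection values are placeholders, and how the TOTP code is obtained is up to the dialog mentioned above:

```ts
import { Client } from "ssh2";

const conn = new Client();
conn
  .on("keyboard-interactive", (_name, _instructions, _lang, prompts, finish) => {
    // The server sends one or more prompts (e.g. "Verification code:");
    // answer each one with the TOTP code collected from the UI dialog.
    finish(prompts.map(() => "123456" /* code from the user */));
  })
  .on("ready", () => {
    // authenticated; open a shell or SFTP session as usual
  })
  .connect({
    host: "example.com",
    username: "user",
    password: "placeholder",
    tryKeyboard: true, // allow keyboard-interactive in addition to password auth
  });
```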

* Update src/backend/ssh/server-stats.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/ssh/file-manager.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Karmaa <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add terminal snippets feature with sidebar UI (#377)

* Add terminal snippets feature with sidebar UI

- Add snippets CRUD API endpoints and database schema
- Implement snippets sidebar accessible from TopNavbar
- Add copy to clipboard functionality
- Include tooltips and optimized styling
- Add English and Chinese translations

* Update src/backend/database/routes/snippets.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Feature engineering improvements (#376)

* chore: add engineering improvements

- Configure Prettier with unified code style rules
- Add husky + lint-staged for automated pre-commit checks
- Add commitlint to enforce conventional commit messages
- Add PR check workflow for CI automation
- Auto-format all files with Prettier
- Fix TypeScript any types in field-crypto.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: enhance development environment

- Add .editorconfig for unified editor settings
- Add .nvmrc to specify Node.js version (20)
- Add useful npm scripts: format, format:check, lint, lint:fix, type-check

* chore: add IDE and Git configuration

- Add VS Code workspace settings for consistent development experience
- Add VS Code extension recommendations (ESLint, Prettier, EditorConfig)
- Add .gitattributes to enforce LF line endings

* refactor: clean up unused variables and empty blocks

- database.ts: Remove unused variables (authManager, format, HTTPS_PORT, etc.)
- database.ts: Fix empty catch blocks with descriptive comments
- database.ts: Add eslint-disable for required middleware parameter
- db/index.ts: Remove unused variables and fix empty catch blocks
- Temporarily remove ESLint from pre-commit to allow incremental fixes

Reduced total errors from 947 to 913 (34 fixes)

* refactor: clean up unused variables and empty blocks in routes

Routes updated:
- credentials.ts: Remove 12 unused variables/imports
- alerts.ts: Remove 1 unused variable
- users.ts: Remove 9 unused variables/imports

Changes:
- Remove unused imports (NextFunction, jwt, UserCrypto, detectKeyType)
- Fix empty catch blocks with descriptive comments
- Prefix reserved parameters with underscore
- Clean up unused error variables in catch blocks

Reduced errors from 913 to 886 (27 fixes)

* refactor: clean up unused variables in routes/ssh.ts

- Remove unused imports (NextFunction, jwt)
- Remove 6 unused variables (result, updateResult, name x3)
- All 8 no-unused-vars errors fixed

* refactor: clean up unused variables and empty blocks in file-manager.ts

- Remove 22 unused variables (linkCount, hostId, userId, content, escapedTempFile, index, code)
- Fix 1 empty catch block
- Simplify multiple route handlers by removing unused destructured parameters

Reduced errors from 878 to 855 (23 fixes)

* refactor: clean up unused variables and empty blocks in utils

database-migration.ts:
- Remove 3 unused variables (encryptedSize, totalOriginalRows, totalMemoryRows)

lazy-field-encryption.ts:
- Fix 6 empty catch blocks with descriptive comments
- Keep error variables where they are used in logging

tunnel.ts:
- Fix multiple empty catch blocks
- Remove empty else blocks
- Partially fixed (10/21 issues resolved)

Reduced errors from 855 to 833 (22 fixes)

* fix: restore error variable in catch block for logging

Fix TypeScript error where error variable was removed from catch block
but still used in logging statements. The error variable is needed for
proper error logging and re-throwing.

* fix: clean up tunnel.ts empty blocks and unused variables

Removed empty blocks and unused variables in tunnel.ts:
- Removed 2 empty else blocks
- Fixed 2 empty if blocks and added comments
- Fixed an empty error handler and added a comment
- Renamed the unused err parameter to _err

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty blocks and unused variables in backend utils

Fixed empty blocks and unused variables in backend utility files:
- auth-manager.ts: removed an empty else block
- system-crypto.ts: fixed an empty catch block and added a comment
- starter.ts: fixed an empty catch block and added a comment
- server-stats.ts: renamed the unused reject parameter to _reject
- credentials.ts: changed connectionTimeout from let to const

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in frontend components

Fixed empty catch blocks in frontend components:
- Tunnel.tsx: fixed an empty catch block and added a comment
- ServerConfig.tsx: fixed an empty catch block and added a comment
- TerminalKeyboard.tsx: fixed an empty catch block and added a comment
- system-crypto.ts: fixed a previously missed empty catch block

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in backend utilities

Fixed 10 empty catch blocks in backend utility files:
- system-crypto.ts: fixed 1 empty catch block
- server-stats.ts: fixed 4 empty catch blocks
- auto-ssl-setup.ts: fixed 1 empty catch block
- ssh-key-utils.ts: fixed 4 empty catch blocks

All empty blocks now have descriptive comments explaining why the error is ignored.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in UI hooks and components

Fixed 5 empty catch blocks in UI components and hooks:
- useDragToSystemDesktop.ts: fixed 2 empty catch blocks
- HomepageAuth.tsx: fixed 1 empty catch block
- HostManagerEditor.tsx: fixed 2 empty catch blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty blocks in file manager and credential editor

Fixed 5 empty blocks:
- FileManagerGrid.tsx: removed 1 empty else block and 1 empty if block
- CredentialEditor.tsx: fixed 1 empty catch block, removed 2 empty if/else blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up all empty catch blocks in Terminal components

Fixed all 8 empty catch blocks in Terminal components:
- Desktop/Apps/Terminal/Terminal.tsx: fixed 5 empty catch blocks
- Mobile/Apps/Terminal/Terminal.tsx: fixed 3 empty catch blocks

All empty blocks now have descriptive comments. This is the final batch of empty-block fixes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: remove useless try/catch wrappers

Removed 3 useless try/catch wrappers:
- users.ts: removed an outer try/catch that only re-threw the error
- FileManager.tsx: removed an inner try/catch that only re-threw the error
- DiffViewer.tsx: removed an inner try/catch that only re-threw the error

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: remove unused imports and mark unused parameters

Removed unused imports and marked unused parameters:
- auto-ssl-setup.ts: removed the unused crypto import
- user-crypto.ts: removed the unused users import
- user-data-import.ts: removed the unused nanoid import
- simple-db-ops.ts: marked the unused userId and tableName parameters

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unnecessary escape characters in regex patterns

Removed unnecessary escape characters in regex patterns:
- users.ts: fixed 5 unnecessary \/ escapes
- TabContext.tsx: fixed 1 unnecessary \/ escape

In string-form regular expressions, / does not need to be escaped.

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>

* feat: enhance server stats widgets and fix TypeScript/ESLint errors (#394)

* feat: add draggable server stats dashboard with customizable widgets

* fix: widget deletion and layout persistence issues

* fix: improve widget deletion UX and add debug logs for persistence

* fix: resolve widget deletion and layout persistence issues

- Add drag handles to widget title bars for precise drag control
- Prevent delete button from triggering drag via event stopPropagation
- Include statsConfig field in all GET/PUT API responses
- Remove debug console logs from production code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: complete statsConfig field support across all API routes

- Add statsConfig to POST /db/host (create) route
- Add statsConfig to all GET routes for consistent API responses
- Remove incorrect statsConfig schema from HostManagerEditor
- statsConfig is now only managed by Server page layout editor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: add statsConfig to metrics API response

- Add statsConfig field to SSHHostWithCredentials interface
- Include statsConfig in resolveHostCredentials baseHost object
- Ensures /metrics/:id API returns complete host configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: include statsConfig in SSH host create/update requests

The statsConfig field was being dropped by createSSHHost and updateSSHHost
functions in main-axios.ts, preventing layout customization from persisting.

Fixed by adding statsConfig to the submitData object in both functions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: refactor server stats widgets into modular structure

Created dedicated widgets directory with individual components:
- CpuWidget, MemoryWidget, DiskWidget as separate components
- Widget registry for centralized widget configuration
- AddWidgetDialog for user-friendly widget selection
- Updated Server.tsx to use modular widget system

Benefits:
- Better code organization and maintainability
- Easier to add new widget types in the future
- Centralized widget metadata and configuration
- User can now add widgets via dialog interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: exit edit mode after saving layout

* feat: add customizable widget sizes with chart visualizations

Add three-tier size system (small/medium/large) for server stats widgets.
Integrate recharts library for visualizing trends in large widgets with
line charts (CPU), area charts (Memory), and radial bar charts (Disk).
Fix layout overflow issues with proper flexbox patterns.

* refactor: simplify server stats widget system

Replaced complex drag-and-drop grid layout with simple checkbox-based
configuration and static responsive grid display.

- Removed react-grid-layout dependency and 6 related packages
- Simplified StatsConfig from complex Widget objects to simple array
- Added Statistics tab in HostManagerEditor for checkbox selection
- Refactored Server.tsx to use CSS Grid instead of ResponsiveGridLayout
- Simplified widget components by removing edit mode and size selection
- Deleted unused AddWidgetDialog and registry files
- Fixed statsConfig serialization in backend routes

Net result: -787 lines of code, cleaner architecture.
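
In effect the configuration shrinks to an ordered list of widget ids rendered through a static registry, roughly like the sketch below (component and id names are illustrative only):

```ts
import type { ComponentType } from "react";

// statsConfig is now just an ordered list of enabled widget ids
type StatsConfig = string[];

// hypothetical widget components and registry
declare const CpuWidget: ComponentType<{ hostId: number }>;
declare const MemoryWidget: ComponentType<{ hostId: number }>;
declare const DiskWidget: ComponentType<{ hostId: number }>;

const widgetRegistry: Record<string, ComponentType<{ hostId: number }>> = {
  cpu: CpuWidget,
  memory: MemoryWidget,
  disk: DiskWidget,
};

// The page filters the registry by the host's statsConfig and lays the result
// out with a plain CSS grid instead of a drag-and-drop grid library.
function enabledWidgets(config: StatsConfig) {
  return config.map((id) => widgetRegistry[id]).filter(Boolean);
}
```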

* feat: add system, uptime, network and processes widgets

Add four new server statistics widgets:
- SystemWidget: displays hostname, OS, and kernel information
- UptimeWidget: shows server total uptime with formatted display
- NetworkWidget: lists network interfaces with IP and status
- ProcessesWidget: displays top processes by CPU usage

Backend changes:
- Extended SSH metrics collection to gather network, uptime, process, and system data
- Added commands to parse /proc/uptime, ip addr, ps aux output

Frontend changes:
- Created 4 new widget components with consistent styling
- Updated widget type definitions and HostManagerEditor
- Unified all widget heights to 280px for consistent layout
- Added translations for all new widgets (EN/ZH)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: improve widget styling and UX consistency

Enhance all server stats widgets with improved styling and user experience:

Widget improvements:
- Fix hardcoded titles, now use i18n translations for all widgets
- Improve data formatting with consistent translation keys
- Enhance empty state displays with better visual hierarchy
- Add smooth hover transitions and visual feedback
- Standardize spacing and layout patterns across widgets

Specific optimizations:
- CPU: Use translated load average display
- Memory: Translate "Free" label
- Disk: Translate "Available" label
- System: Improve icon colors and spacing consistency
- Network: Better empty state, enhanced card styling
- Processes: Improved card borders and spacing

Visual polish:
- Unified icon sizing and opacity for empty states
- Consistent border radius (rounded-lg)
- Better hover states with subtle transitions
- Enhanced font weights for improved readability

* fix: replace explicit any types with proper TypeScript types

- Replace 'any' with 'unknown' in catch blocks and add type assertions
- Create explicit interfaces for complex objects (HostConfig, TabData, TerminalHandle)
- Fix window/document object type extensions
- Update Electron API type definitions
- Improve type safety in database routes and utilities
- Add proper types to Terminal components (Desktop & Mobile)
- Fix navigation component types (TopNavbar, LeftSidebar, AppView)

Reduces TypeScript lint errors from 394 to 358 (-36 errors)
Fixes 45 @typescript-eslint/no-explicit-any violations
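
The recurring pattern behind these catch-block changes, for reference (console.error stands in for the project's logger):

```ts
function logError(context: string, err: unknown): void {
  // narrow `unknown` before using it, instead of typing the catch variable as `any`
  const message = err instanceof Error ? err.message : String(err);
  console.error(`${context}: ${message}`);
}

try {
  JSON.parse("{not json");
} catch (err: unknown) {
  logError("parse failed", err);
}
```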

* fix: replace explicit any types with proper TypeScript types

- Create explicit interfaces for Request extensions (AuthenticatedRequest, RequestWithHeaders)
- Add type definitions for WebSocket messages and SSH connection data
- Use generic types in DataCrypto methods instead of any return types
- Define proper interfaces for file manager data structures
- Replace catch block any types with unknown and proper type assertions
- Add HostConfig and TabData interfaces for Server component

Fixes 32 @typescript-eslint/no-explicit-any violations across 5 files

* fix: resolve 6 TypeScript compilation errors

Fixed field name mismatches and generic type issues:
- database.ts: Changed camelCase to snake_case for key_password, private_key, public_key fields
- simple-db-ops.ts: Added explicit generic type parameters to DataCrypto method calls

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve unused variables in backend utils

Fixed @typescript-eslint/no-unused-vars errors in:
- starter.ts: removed unused error variables (2 fixes)
- auto-ssl-setup.ts: removed unused error variable (1 fix)
- ssh-key-utils.ts: removed unused error variables (3 fixes)
- user-crypto.ts: removed unused error variables (5 fixes)
- data-crypto.ts: removed unused plaintextFields and error variables (2 fixes)
- simple-db-ops.ts: removed unused parameters _userId and _tableName (2 fixes)

Total: 15 unused variable errors fixed

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variable in terminal.ts

Fixed @typescript-eslint/no-unused-vars errors:
- Removed unused userPayload variable (line 123)
- Removed unused cols and rows from destructuring (line 348)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve unused variables in server-stats.ts

Fixed @typescript-eslint/no-unused-vars errors:
- Removed unused _reject parameter in Promise (line 64)
- Removed shadowed now variable in pollStatusesOnce (line 1130)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variables in tunnel.ts

Removed 5 unused variables:
- Removed unused data parameter from stdout event handler
- Removed hasSourcePassword, hasSourceKey, hasEndpointPassword, hasEndpointKey variables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variables in main-axios.ts

Removed 8 unused variables:
- Removed unused type imports (Credential, CredentialData, HostInfo, ApiResponse)
- Removed unused apiPort variable
- Removed unused error variables in 3 catch blocks

* fix: remove unused variables in terminal.ts and starter.ts

Removed 2 unused variables:
- Removed unused JWTPayload type import from terminal.ts
- Removed unused _promise parameter from starter.ts

* fix: remove unused variables in sidebar.tsx

Removed 9 unused variables:
- Removed 5 unused Sheet component imports
- Removed unused SIDEBAR_WIDTH_MOBILE constant
- Removed 3 unused variables from useSidebar destructuring

* fix: remove 13 unused variables in frontend files

- version-check-modal.tsx: removed 4 unused imports and functions
- main.tsx: removed unused isMobile state
- AdminSettings.tsx: removed 8 unused imports and error variables

* fix: remove 28 unused variables across frontend components

Cleaned up unused imports, state variables, and function parameters:
- CredentialsManager.tsx: removed 8 unused variables (Sheet/Select imports)
- FileManager.tsx: removed 10 unused variables (icons, state, functions)
- Terminal.tsx (Desktop): removed 5 unused variables (state, handlers)
- Terminal.tsx (Mobile): removed 5 unused variables (imports, state)

Reduced lint errors from 271 to 236 (35 errors fixed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 10 unused variables in File Manager and config files

Cleaned up more unused imports, parameters, and variables:
- FileManagerGrid.tsx: removed 4 unused variables (params, function)
- FileManagerContextMenu.tsx: removed Share import
- FileManagerSidebar.tsx: removed onLoadDirectory parameter
- DraggableWindow.tsx: removed Square import
- FileWindow.tsx: removed updateWindow variable
- ServerConfig.tsx: removed 2 unused error parameters

Reduced lint errors from 236 to 222 (14 errors fixed total)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 7 unused variables in widgets and Homepage components

Cleaned up unused imports, parameters, and variables:
- DiskWidget.tsx: removed metricsHistory parameter
- FileManagerContextMenu.tsx: removed ExternalLink import
- Homepage.tsx: removed useTranslation import
- HomepageAlertManager.tsx: removed loading variable
- HomepageAuth.tsx: removed setCookie import (Desktop & Mobile)
- HompageUpdateLog.tsx: removed err parameter

Reduced lint errors from 222 to 216 (6 errors fixed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 8 unused variables in File Manager and Host Manager components

Cleaned up unused imports, state variables, and function parameters:
- DiffViewer.tsx: removed unused error parameter in catch block
- FileViewer.tsx: removed ReactPlayer import, unused originalContent state,
  node parameters from markdown code components, audio variable
- HostManager.tsx: removed onSelectView and updatedHost parameters
- TunnelViewer.tsx: removed TunnelConnection import

Reduced lint errors from 271 to 208 (63 errors fixed total)

* fix: remove 7 unused variables in UI hooks and components

Cleaned up unused parameters and functions:
- status/index.tsx: removed unused className parameter from StatusIndicator
- useDragToDesktop.ts: removed unused sshHost parameter and from dependency
  arrays (4 occurrences)
- useDragToSystemDesktop.ts: removed unused sshHost parameter and
  getLastSaveDirectory function (29 lines removed)

Continued reducing frontend lint errors

* fix: remove 2 unused variables in hooks and TabContext

- useDragToDesktop.ts: removed unused onSuccess in dragFolderToDesktop
- TabContext.tsx: removed unused useTranslation import and t variable

Continued reducing frontend lint errors

* fix: remove 2 unused variables in Homepage component

- Removed unused isAdmin state variable (changed to setter only)
- Removed unused jwt variable by inlining getCookie check

Continued reducing frontend lint errors

* fix: remove 3 unused variables in Mobile navigation components

- Host.tsx: removed unused Server icon import
- LeftSidebar.tsx: removed unused setHostsLoading setter and err parameter

Continued reducing frontend lint errors

* fix: remove 9 unused variables across multiple files

Fixed unused variables in:
- database-file-encryption.ts: removed currentFingerprint (backend)
- FileManagerContextMenu.tsx: removed ExternalLink import, hasDirectories
- frontend-logger.ts: removed 5 unused shortUrl variables

Continued reducing lint errors

* fix: remove 18 unused variables across 4 files

- HostManagerViewer.tsx: remove 9 unused error variables and parameters
- HostManagerEditor.tsx: remove WidgetType import, hosts/loading states, error variable
- CredentialViewer.tsx: remove 3 unused error variables
- Server.tsx: remove 2 unused error variables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 9 unused variables across 4 files

- SnippetsSidebar.tsx: remove 3 unused err variables in catch blocks
- TunnelViewer.tsx: remove 2 unused parameters from callback
- DesktopApp.tsx: remove getCookie import and unused state variables
- HomepageAlertManager.tsx: remove 2 unused err variables in catch blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 10 unused variables and imports across 4 navigation files

- Homepage.tsx: remove unused username state variable
- AppView.tsx: remove 3 unused Lucide icon imports
- TopNavbar.tsx: remove 4 unused Accordion component imports
- LeftSidebar.tsx: remove 2 unused variables (err, jwt)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 5 unused variables across 4 user/credentials files

- PasswordReset.tsx: remove unused result variable
- UserProfile.tsx: remove unused Key import and err variable
- version-check-modal.tsx: remove unused setVersionDismissed setter
- CredentialsManager.tsx: remove unused e parameter from handleDragLeave

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 2 unused variables in FileViewer and TerminalWindow

- FileViewer.tsx: remove unused node parameter from code component
- TerminalWindow.tsx: remove unused handleMinimize function

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 10 unused variables in HomepageAuth.tsx

Removed unused variables:
- getCookie import
- dbError prop
- visibility state and toggleVisibility
- error state variable
- result variable in handleInitiatePasswordReset
- token URL parameter
- err parameters in catch blocks
- retryDatabaseConnection function
- Multiple setError(null) calls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 9 unused variables across multiple files

Files fixed:
- DesktopApp.tsx: Removed _nextView parameter
- TerminalWindow.tsx: Removed minimizeWindow
- Mobile Host.tsx: Removed Server import
- Mobile LeftSidebar.tsx: Removed setHostsLoading, err in catch
- Desktop LeftSidebar.tsx: Removed getCookie, setCookie, onSelectView, getView, setHostsLoading

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 10 unused variables in Mobile files

Files fixed:
- MobileApp.tsx: Removed getCookie, removeTab, isAdmin, id, err parameters
- Mobile/HomepageAuth.tsx: Removed getCookie, error state, result, token, err parameters

All @typescript-eslint/no-unused-vars errors in frontend now resolved!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused t variable in TabContext

Removed useTranslation import and unused t variable
in Mobile TabContext.tsx

All @typescript-eslint/no-unused-vars errors now resolved!
Total fixed: 154 unused variables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve TypeScript and ESLint errors across the codebase

- Fixed @typescript-eslint/no-unused-vars errors (31 instances)
- Fixed @typescript-eslint/no-explicit-any errors in backend (~22 instances)
- Fixed @typescript-eslint/no-explicit-any errors in frontend (~60 instances)
- Fixed prefer-const errors (5 instances)
- Fixed no-empty-object-type and rules-of-hooks errors
- Added proper type assertions for database operations
- Improved type safety in authentication and encryption modules
- Enhanced type definitions for API routes and SSH operations

All TypeScript compilation errors resolved. Application builds and runs successfully.

* fix: disable react-refresh/only-export-components rule for component files

Disable the react-refresh/only-export-components ESLint rule in files
that export both components and related utilities (hooks, types,
constants). This is a pragmatic solution to maintain code organization
without splitting files unnecessarily.

* style: fix prettier formatting issues

Fix code style issues in translation file and TOTP dialog component
to pass CI prettier check.

* chore: fix rollup optional dependencies installation in CI

Add step to force reinstall rollup after npm ci to fix the known npm
bug with optional dependencies on Linux x64 platform.

* chore: fix lightningcss optional dependencies in CI

Add lightningcss to the force reinstall step to fix npm optional
dependencies bug for both rollup and lightningcss on Linux x64.

* chore: fix npm optional dependencies bug in CI

Remove package-lock.json and node_modules before install to properly
handle optional dependencies for rollup, lightningcss, and tailwindcss
native bindings on Linux x64 platform as recommended by npm.

* Update src/types/index.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Set terminal environment variables for SSH

Added environment variables for terminal configuration.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Karmaa <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* feat: begin macOS support

* Delete .github/ISSUE_TEMPLATE/bug_report.yml

* Delete .github/ISSUE_TEMPLATE/feature_request.yml

* Add issue template configuration for support links

* Revise support instructions in README.md

Updated support section with new issue reporting instructions and clarified Discord support response times.

* Update repository links and badge URLs in README

* Update links to new organization

* Migrate workflows to Blacksmith (#421)

Co-authored-by: blacksmith-sh[bot] <157653362+blacksmith-sh[bot]@users.noreply.github.com>

* Feature request: Add delete confirmation dialog to file manager (#344)

* Feature request: Add delete confirmation dialog to file manager

- Added confirmation dialog before deleting files/folders
- Users must confirm deletion with a warning message
- Works for both Delete key and right-click delete
- Shows different messages for single file, folder, or multiple items
- Includes permanent deletion warning
- Follows existing design patterns using confirmWithToast

* Adds confirmation for deletion of items including folders

Updates the file deletion confirmation logic to distinguish between
deleting multiple items with or without folders. Introduces a new
translation string for a clearer user prompt when folders and their
contents are included in the deletion.

Improves clarity and reduces user error when performing bulk deletions.

* feat: Add Chinese translations for delete confirmation messages

* Adds camelCase support for encrypted field mappings (#342)

Extends encrypted field mappings to include camelCase variants
to support consistency and compatibility with different naming
conventions. Updates reverse mappings for Drizzle ORM to allow
conversion between camelCase and snake_case field names.

Improves integration with systems using mixed naming styles.
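
A rough sketch of what such a dual mapping can look like; the field names below are illustrative, not the project's actual encrypted-field list:

// Both naming variants resolve to the same canonical encrypted column.
const ENCRYPTED_FIELD_MAP: Record<string, string> = {
  private_key: "private_key",
  privateKey: "private_key",
  key_password: "key_password",
  keyPassword: "key_password",
};

function toSnakeCase(field: string): string {
  return (
    ENCRYPTED_FIELD_MAP[field] ??
    field.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`)
  );
}

// toSnakeCase("privateKey") === "private_key"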

* Run code cleanup, add sidebar persistence, fix OIDC credentials, force SSH password.

* Fix snake case mismatching

* Fix race condition and remove redundant kekSalt for OIDC users

Critical fixes:

1. Race Condition Mitigation:
   - Added read-after-write verification in setupOIDCUserEncryption()
   - Ensures session uses the DEK that's actually in the database
   - Prevents data loss when concurrent logins occur for new OIDC users
   - If race is detected, discards generated DEK and uses stored one

2. Remove Redundant kekSalt Logic:
   - Removed unnecessary kekSalt generation and checks for OIDC users
   - kekSalt is not used in OIDC key derivation (uses userId as salt)
   - Reduces database operations from 4 to 2 per authentication
   - Simplifies code and removes potential confusion

3. Improved Error Handling:
   - systemKey cleanup moved to finally block
   - Ensures sensitive key material is always cleared from memory

These changes ensure data consistency and prevent potential data loss
in high-concurrency scenarios.
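
A condensed sketch of the read-after-write idea, with in-memory stand-ins (storeDekIfAbsent, getStoredDek) for the real database calls:

import crypto from "node:crypto";

const dekStore = new Map<string, Buffer>(); // illustrative stand-in for the database
async function storeDekIfAbsent(userId: string, dek: Buffer): Promise<void> {
  if (!dekStore.has(userId)) dekStore.set(userId, dek);
}
async function getStoredDek(userId: string): Promise<Buffer> {
  return dekStore.get(userId)!;
}

async function ensureOidcDek(userId: string): Promise<Buffer> {
  const generated = crypto.randomBytes(32);
  await storeDekIfAbsent(userId, generated); // write
  const stored = await getStoredDek(userId); // read back what actually won the race
  // If a concurrent login persisted a DEK first, discard ours and use the stored one.
  return stored.equals(generated) ? generated : stored;
}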

* Cleanup OIDC pr and run prettier

* Feature/german language support (#374)

* v1.7.2 (#364)

* Feature request: Add delete confirmation dialog to file manager (#344)

* Feature request: Add delete confirmation dialog to file manager

- Added confirmation dialog before deleting files/folders
- Users must confirm deletion with a warning message
- Works for both Delete key and right-click delete
- Shows different messages for single file, folder, or multiple items
- Includes permanent deletion warning
- Follows existing design patterns using confirmWithToast

* Adds confirmation for deletion of items including folders

Updates the file deletion confirmation logic to distinguish between
deleting multiple items with or without folders. Introduces a new
translation string for a clearer user prompt when folders and their
contents are included in the deletion.

Improves clarity and reduces user error when performing bulk deletions.

* feat: Add Chinese translations for delete confirmation messages

* Adds camelCase support for encrypted field mappings (#342)

Extends encrypted field mappings to include camelCase variants
to support consistency and compatibility with different naming
conventions. Updates reverse mappings for Drizzle ORM to allow
conversion between camelCase and snake_case field names.

Improves integration with systems using mixed naming styles.

* Run code cleanup, add sidebar persistence, fix OIDC credentials, force SSH password.

* Fix snake case mismatching

* Add real client IP

* Fix OIDC credential persistence issue

The issue was that OIDC users were getting a new random Data Encryption Key (DEK)
on every login, which made previously encrypted credentials inaccessible.

Changes:
- Modified setupOIDCUserEncryption() to persist the DEK encrypted with a system-derived key
- Updated authenticateOIDCUser() to properly retrieve and use the persisted DEK
- Ensured OIDC users now have the same encryption persistence as password-based users

This fix ensures that credentials created by OIDC users remain accessible across
multiple login sessions.
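
A minimal sketch of persisting a DEK under a system-derived key, assuming Node's crypto module with scrypt and AES-256-GCM; the secret handling and parameters are illustrative:

import crypto from "node:crypto";

// Derive a per-user system key from a server secret (illustrative derivation).
function deriveSystemKey(serverSecret: string, userId: string): Buffer {
  return crypto.scryptSync(serverSecret, userId, 32);
}

function encryptDek(dek: Buffer, systemKey: Buffer): string {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", systemKey, iv);
  const data = Buffer.concat([cipher.update(dek), cipher.final()]);
  // Persist iv + auth tag + ciphertext so the same DEK can be recovered on later logins.
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString("base64");
}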

* Fix race condition and remove redundant kekSalt for OIDC users

Critical fixes:

1. Race Condition Mitigation:
   - Added read-after-write verification in setupOIDCUserEncryption()
   - Ensures session uses the DEK that's actually in the database
   - Prevents data loss when concurrent logins occur for new OIDC users
   - If race is detected, discards generated DEK and uses stored one

2. Remove Redundant kekSalt Logic:
   - Removed unnecessary kekSalt generation and checks for OIDC users
   - kekSalt is not used in OIDC key derivation (uses userId as salt)
   - Reduces database operations from 4 to 2 per authentication
   - Simplifies code and removes potential confusion

3. Improved Error Handling:
   - systemKey cleanup moved to finally block
   - Ensures sensitive key material is always cleared from memory

These changes ensure data consistency and prevent potential data loss
in high-concurrency scenarios.

* Cleanup OIDC pr and run prettier

---------

Co-authored-by: Ved Prakash <54140516+thorved@users.noreply.github.com>

* Fix typos and improve wording in README.md

Corrected grammar and punctuation in README.

* Image 7.png

* Rename 3gi3b3os5psf1.png to Image 7.png

* Add video demonstration to README

Added a video demonstration to the README.

* Delete repo-images/Image 7.png

* Add files via upload

* Delete repo-images/Image 7.png

* Add files via upload

* Initial German translation

* German translation (#281)

* German translation (#281)

* Implementation of German language support  (#281)

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/locales/de/translation.json

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: Karmaa <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Ved Prakash <54140516+thorved@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Feature disable password login (#378)

* Add admin toggle to disable password login

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/ui/main-axios.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/ui/Desktop/Admin/AdminSettings.tsx

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/database/routes/users.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add SSH TOTP authentication support (#350)

* Add SSH TOTP authentication support

- Implement keyboard-interactive authentication for SSH connections
- Add TOTP dialog component for Terminal and File Manager
- Handle TOTP prompts in WebSocket and HTTP connections
- Disable Server Stats for TOTP-enabled servers
- Add i18n support for TOTP-related messages
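
A minimal sketch of keyboard-interactive handling with the ssh2 client; askUserForAnswer is an assumed stand-in for routing the prompt to the TOTP dialog:

import { Client } from "ssh2";

declare function askUserForAnswer(prompt: string): Promise<string>; // stand-in for the TOTP dialog

const conn = new Client();
conn
  .on("keyboard-interactive", (_name, _instructions, _lang, prompts, finish) => {
    // Forward each prompt (e.g. "Verification code:") to the UI, then answer them in order.
    Promise.all(prompts.map((p) => askUserForAnswer(p.prompt))).then(finish);
  })
  .on("ready", () => {
    conn.shell((err, stream) => {
      if (err) throw err;
      stream.pipe(process.stdout); // attach the terminal stream
    });
  })
  .connect({ host: "example.com", username: "user", tryKeyboard: true });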

* Update src/backend/ssh/server-stats.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update src/backend/ssh/file-manager.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Karmaa <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add terminal snippets feature with sidebar UI (#377)

* Add terminal snippets feature with sidebar UI

- Add snippets CRUD API endpoints and database schema
- Implement snippets sidebar accessible from TopNavbar
- Add copy to clipboard functionality
- Include tooltips and optimized styling
- Add English and Chinese translations
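
A hypothetical sketch of what such a snippets table could look like with a Drizzle ORM + SQLite setup; the actual schema and column names may differ:

import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const snippets = sqliteTable("snippets", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  userId: text("user_id").notNull(),   // owner of the snippet
  name: text("name").notNull(),        // label shown in the sidebar
  content: text("content").notNull(),  // command text sent to the terminal
  createdAt: integer("created_at", { mode: "timestamp" }),
});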

* Update src/backend/database/routes/snippets.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Feature engineering improvements (#376)

* chore: add engineering improvements

- Configure Prettier with unified code style rules
- Add husky + lint-staged for automated pre-commit checks
- Add commitlint to enforce conventional commit messages
- Add PR check workflow for CI automation
- Auto-format all files with Prettier
- Fix TypeScript any types in field-crypto.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: enhance development environment

- Add .editorconfig for unified editor settings
- Add .nvmrc to specify Node.js version (20)
- Add useful npm scripts: format, format:check, lint, lint:fix, type-check

* chore: add IDE and Git configuration

- Add VS Code workspace settings for consistent development experience
- Add VS Code extension recommendations (ESLint, Prettier, EditorConfig)
- Add .gitattributes to enforce LF line endings

* refactor: clean up unused variables and empty blocks

- database.ts: Remove unused variables (authManager, format, HTTPS_PORT, etc.)
- database.ts: Fix empty catch blocks with descriptive comments
- database.ts: Add eslint-disable for required middleware parameter
- db/index.ts: Remove unused variables and fix empty catch blocks
- Temporarily remove ESLint from pre-commit to allow incremental fixes

Reduced total errors from 947 to 913 (34 fixes)

* refactor: clean up unused variables and empty blocks in routes

Routes updated:
- credentials.ts: Remove 12 unused variables/imports
- alerts.ts: Remove 1 unused variable
- users.ts: Remove 9 unused variables/imports

Changes:
- Remove unused imports (NextFunction, jwt, UserCrypto, detectKeyType)
- Fix empty catch blocks with descriptive comments
- Prefix reserved parameters with underscore
- Clean up unused error variables in catch blocks

Reduced errors from 913 to 886 (27 fixes)
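
The two recurring cleanups described above, shown as a tiny illustrative snippet:

// Reserved parameter kept for the handler signature, prefixed so the linter accepts it.
function errorHandler(err: Error, _req: unknown, res: { status: (code: number) => void }): void {
  console.error(err.message);
  res.status(500);
}

try {
  JSON.parse("not json");
} catch {
  // Ignore parse errors: the value is optional and a default is used instead.
}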

* refactor: clean up unused variables in routes/ssh.ts

- Remove unused imports (NextFunction, jwt)
- Remove 6 unused variables (result, updateResult, name x3)
- All 8 no-unused-vars errors fixed

* refactor: clean up unused variables and empty blocks in file-manager.ts

- Remove 22 unused variables (linkCount, hostId, userId, content, escapedTempFile, index, code)
- Fix 1 empty catch block
- Simplify multiple route handlers by removing unused destructured parameters

Reduced errors from 878 to 855 (23 fixes)

* refactor: clean up unused variables and empty blocks in utils

database-migration.ts:
- Remove 3 unused variables (encryptedSize, totalOriginalRows, totalMemoryRows)

lazy-field-encryption.ts:
- Fix 6 empty catch blocks with descriptive comments
- Keep error variables where they are used in logging

tunnel.ts:
- Fix multiple empty catch blocks
- Remove empty else blocks
- Partially fixed (10/21 issues resolved)

Reduced errors from 855 to 833 (22 fixes)

* fix: restore error variable in catch block for logging

Fix TypeScript error where error variable was removed from catch block
but still used in logging statements. The error variable is needed for
proper error logging and re-throwing.

* fix: clean up tunnel.ts empty blocks and unused variables

Removed empty blocks and unused variables in tunnel.ts:
- Removed 2 empty else blocks
- Fixed 2 empty if blocks and added comments
- Fixed an empty error handler and added a comment
- Renamed the unused err parameter to _err

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty blocks and unused variables in backend utils

Fixed empty blocks and unused variables in backend utility files:
- auth-manager.ts: removed an empty else block
- system-crypto.ts: fixed an empty catch block and added a comment
- starter.ts: fixed an empty catch block and added a comment
- server-stats.ts: renamed the unused reject parameter to _reject
- credentials.ts: changed connectionTimeout from let to const

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in frontend components

Fixed empty catch blocks in frontend components:
- Tunnel.tsx: fixed an empty catch block and added a comment
- ServerConfig.tsx: fixed an empty catch block and added a comment
- TerminalKeyboard.tsx: fixed an empty catch block and added a comment
- system-crypto.ts: fixed a previously missed empty catch block

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in backend utilities

Fixed 10 empty catch blocks in backend utility files:
- system-crypto.ts: fixed 1 empty catch block
- server-stats.ts: fixed 4 empty catch blocks
- auto-ssl-setup.ts: fixed 1 empty catch block
- ssh-key-utils.ts: fixed 4 empty catch blocks

All empty blocks now have descriptive comments explaining why the error is ignored.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty catch blocks in UI hooks and components

Fixed 5 empty catch blocks in UI components and hooks:
- useDragToSystemDesktop.ts: fixed 2 empty catch blocks
- HomepageAuth.tsx: fixed 1 empty catch block
- HostManagerEditor.tsx: fixed 2 empty catch blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up empty blocks in file manager and credential editor

Fixed 5 empty blocks:
- FileManagerGrid.tsx: removed 1 empty else block and 1 empty if block
- CredentialEditor.tsx: fixed 1 empty catch block, removed 2 empty if/else blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clean up all empty catch blocks in Terminal components

Fixed all 8 empty catch blocks in the Terminal components:
- Desktop/Apps/Terminal/Terminal.tsx: fixed 5 empty catch blocks
- Mobile/Apps/Terminal/Terminal.tsx: fixed 3 empty catch blocks

All empty blocks now have descriptive comments. This is the final batch of empty block fixes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: remove useless try/catch wrappers

Removed 3 useless try/catch wrappers:
- users.ts: removed an outer try/catch that only re-threw the error
- FileManager.tsx: removed an inner try/catch that only re-threw the error
- DiffViewer.tsx: removed an inner try/catch that only re-threw the error

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: remove unused imports and mark unused parameters

Removed unused imports and marked unused parameters:
- auto-ssl-setup.ts: removed the unused crypto import
- user-crypto.ts: removed the unused users import
- user-data-import.ts: removed the unused nanoid import
- simple-db-ops.ts: marked the unused userId and tableName parameters

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unnecessary escape characters in regex patterns

Removed unnecessary escape characters in regex patterns:
- users.ts: fixed 5 unnecessary \/ escapes
- TabContext.tsx: fixed 1 unnecessary \/ escape

In a string-form regular expression, / does not need to be escaped.
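
A two-line illustration of the fix:

const withEscape = new RegExp("^\\/api"); // flagged by no-useless-escape
const clean = new RegExp("^/api");        // matches exactly the same input
console.log(withEscape.test("/api/hosts"), clean.test("/api/hosts")); // true true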

---------

Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>

* feat: enhance server stats widgets and fix TypeScript/ESLint errors (#394)

* feat: add draggable server stats dashboard with customizable widgets

* fix: widget deletion and layout persistence issues

* fix: improve widget deletion UX and add debug logs for persistence

* fix: resolve widget deletion and layout persistence issues

- Add drag handles to widget title bars for precise drag control
- Prevent delete button from triggering drag via event stopPropagation
- Include statsConfig field in all GET/PUT API responses
- Remove debug console logs from production code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
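
A rough TSX sketch of keeping the delete click out of the drag handle; the component and class names are invented for illustration:

function WidgetTitleBar({ title, onDelete }: { title: string; onDelete: () => void }) {
  return (
    <div className="drag-handle">
      <span>{title}</span>
      <button
        onClick={(e) => {
          e.stopPropagation(); // keep the click from starting a drag on the title bar
          onDelete();
        }}
      >
        ×
      </button>
    </div>
  );
}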

* fix: complete statsConfig field support across all API routes

- Add statsConfig to POST /db/host (create) route
- Add statsConfig to all GET routes for consistent API responses
- Remove incorrect statsConfig schema from HostManagerEditor
- statsConfig is now only managed by Server page layout editor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: add statsConfig to metrics API response

- Add statsConfig field to SSHHostWithCredentials interface
- Include statsConfig in resolveHostCredentials baseHost object
- Ensures /metrics/:id API returns complete host configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: include statsConfig in SSH host create/update requests

The statsConfig field was being dropped by createSSHHost and updateSSHHost
functions in main-axios.ts, preventing layout customization from persisting.

Fixed by adding statsConfig to the submitData object in both functions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: refactor server stats widgets into modular structure

Created dedicated widgets directory with individual components:
- CpuWidget, MemoryWidget, DiskWidget as separate components
- Widget registry for centralized widget configuration
- AddWidgetDialog for user-friendly widget selection
- Updated Server.tsx to use modular widget system

Benefits:
- Better code organization and maintainability
- Easier to add new widget types in the future
- Centralized widget metadata and configuration
- User can now add widgets via dialog interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: exit edit mode after saving layout

* feat: add customizable widget sizes with chart visualizations

Add three-tier size system (small/medium/large) for server stats widgets.
Integrate recharts library for visualizing trends in large widgets with
line charts (CPU), area charts (Memory), and radial bar charts (Disk).
Fix layout overflow issues with proper flexbox patterns.
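
A minimal recharts sketch of the memory area chart, with an invented data shape; the real widgets wire this to live metrics history:

import { AreaChart, Area, XAxis, YAxis, ResponsiveContainer } from "recharts";

const history = [
  { t: "12:00", mem: 41 },
  { t: "12:01", mem: 45 },
];

export function MemoryTrend() {
  return (
    <ResponsiveContainer width="100%" height={120}>
      <AreaChart data={history}>
        <XAxis dataKey="t" hide />
        <YAxis domain={[0, 100]} hide />
        <Area type="monotone" dataKey="mem" />
      </AreaChart>
    </ResponsiveContainer>
  );
}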

* refactor: simplify server stats widget system

Replaced complex drag-and-drop grid layout with simple checkbox-based
configuration and static responsive grid display.

- Removed react-grid-layout dependency and 6 related packages
- Simplified StatsConfig from complex Widget objects to simple array
- Added Statistics tab in HostManagerEditor for checkbox selection
- Refactored Server.tsx to use CSS Grid instead of ResponsiveGridLayout
- Simplified widget components by removing edit mode and size selection
- Deleted unused AddWidgetDialog and registry files
- Fixed statsConfig serialization in backend routes

Net result: -787 lines of code, cleaner architecture.

* feat: add system, uptime, network and processes widgets

Add four new server statistics widgets:
- SystemWidget: displays hostname, OS, and kernel information
- UptimeWidget: shows server total uptime with formatted display
- NetworkWidget: lists network interfaces with IP and status
- ProcessesWidget: displays top processes by CPU usage

Backend changes:
- Extended SSH metrics collection to gather network, uptime, process, and system data
- Added commands to parse /proc/uptime, ip addr, ps aux output

Frontend changes:
- Created 4 new widget components with consistent styling
- Updated widget type definitions and HostManagerEditor
- Unified all widget heights to 280px for consistent layout
- Added translations for all new widgets (EN/ZH)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
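
As one small example of the parsing involved, a sketch of turning /proc/uptime output into a formatted uptime string (the real collector covers more commands):

// /proc/uptime looks like "123456.78 987654.32" (seconds up, seconds idle).
function formatUptime(procUptime: string): string {
  const totalSeconds = Math.floor(parseFloat(procUptime.split(" ")[0]));
  const days = Math.floor(totalSeconds / 86400);
  const hours = Math.floor((totalSeconds % 86400) / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  return `${days}d ${hours}h ${minutes}m`;
}

// formatUptime("123456.78 987654.32") === "1d 10h 17m"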

* refactor: improve widget styling and UX consistency

Enhance all server stats widgets with improved styling and user experience:

Widget improvements:
- Fix hardcoded titles, now use i18n translations for all widgets
- Improve data formatting with consistent translation keys
- Enhance empty state displays with better visual hierarchy
- Add smooth hover transitions and visual feedback
- Standardize spacing and layout patterns across widgets

Specific optimizations:
- CPU: Use translated load average display
- Memory: Translate "Free" label
- Disk: Translate "Available" label
- System: Improve icon colors and spacing consistency
- Network: Better empty state, enhanced card styling
- Processes: Improved card borders and spacing

Visual polish:
- Unified icon sizing and opacity for empty states
- Consistent border radius (rounded-lg)
- Better hover states with subtle transitions
- Enhanced font weights for improved readability

* fix: replace explicit any types with proper TypeScript types

- Replace 'any' with 'unknown' in catch blocks and add type assertions
- Create explicit interfaces for complex objects (HostConfig, TabData, TerminalHandle)
- Fix window/document object type extensions
- Update Electron API type definitions
- Improve type safety in database routes and utilities
- Add proper types to Terminal components (Desktop & Mobile)
- Fix navigation component types (TopNavbar, LeftSidebar, AppView)

Reduces TypeScript lint errors from 394 to 358 (-36 errors)
Fixes 45 @typescript-eslint/no-explicit-any violations

* fix: replace explicit any types with proper TypeScript types

- Create explicit interfaces for Request extensions (AuthenticatedRequest, RequestWithHeaders)
- Add type definitions for WebSocket messages and SSH connection data
- Use generic types in DataCrypto methods instead of any return types
- Define proper interfaces for file manager data structures
- Replace catch block any types with unknown and proper type assertions
- Add HostConfig and TabData interfaces for Server component

Fixes 32 @typescript-eslint/no-explicit-any violations across 5 files

* fix: resolve 6 TypeScript compilation errors

Fixed field name mismatches and generic type issues:
- database.ts: Changed camelCase to snake_case for key_password, private_key, public_key fields
- simple-db-ops.ts: Added explicit generic type parameters to DataCrypto method calls

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve unused variables in backend utils

Fixed @typescript-eslint/no-unused-vars errors in:
- starter.ts: removed unused error variables (2 fixes)
- auto-ssl-setup.ts: removed unused error variable (1 fix)
- ssh-key-utils.ts: removed unused error variables (3 fixes)
- user-crypto.ts: removed unused error variables (5 fixes)
- data-crypto.ts: removed unused plaintextFields and error variables (2 fixes)
- simple-db-ops.ts: removed unused parameters _userId and _tableName (2 fixes)

Total: 15 unused variable errors fixed

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variable in terminal.ts

Fixed @typescript-eslint/no-unused-vars errors:
- Removed unused userPayload variable (line 123)
- Removed unused cols and rows from destructuring (line 348)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve unused variables in server-stats.ts

Fixed @typescript-eslint/no-unused-vars errors:
- Removed unused _reject parameter in Promise (line 64)
- Removed shadowed now variable in pollStatusesOnce (line 1130)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variables in tunnel.ts

Removed 5 unused variables:
- Removed unused data parameter from stdout event handler
- Removed hasSourcePassword, hasSourceKey, hasEndpointPassword, hasEndpointKey variables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unused variables in main-axios.ts

Removed 8 unused variables:
- Removed unused type imports (Credential, CredentialData, HostInfo, ApiResponse)
- Removed unused apiPort variable
- Removed unused error variables in 3 catch blocks

* fix: remove unused variables in terminal.ts and starter.ts

Removed 2 unused variables:
- Removed unused JWTPayload type import from terminal.ts
- Removed unused _promise parameter from starter.ts

* fix: remove unused variables in sidebar.tsx

Removed 9 unused variables:
- Removed 5 unused Sheet component imports
- Removed unused SIDEBAR_WIDTH_MOBILE constant
- Removed 3 unused variables from useSidebar destructuring

* fix: remove 13 unused variables in frontend files

- version-check-modal.tsx: removed 4 unused imports and functions
- main.tsx: removed unused isMobile state
- AdminSettings.tsx: removed 8 unused imports and error variables

* fix: remove 28 unused variables across frontend components

Cleaned up unused imports, state variables, and function parameters:
- CredentialsManager.tsx: removed 8 unused variables (Sheet/Select imports)
- FileManager.tsx: removed 10 unused variables (icons, state, functions)
- Terminal.tsx (Desktop): removed 5 unused variables (state, handlers)
- Terminal.tsx (Mobile): removed 5 unused variables (imports, state)

Reduced lint errors from 271 to 236 (35 errors fixed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 10 unused variables in File Manager and config files

Cleaned up more unused imports, parameters, and variables:
- FileManagerGrid.tsx: removed 4 unused variables (params, function)
- FileManagerContextMenu.tsx: removed Share import
- FileManagerSidebar.tsx: removed onLoadDirectory parameter
- DraggableWindow.tsx: removed Square import
- FileWindow.tsx: removed updateWindow variable
- ServerConfig.tsx: removed 2 unused error parameters

Reduced lint errors from 236 to 222 (14 errors fixed total)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove 7 unused variables in widgets and Homepage components

Cleaned up unused imports, p…

* fix: Improve TOTP reliability, move components around, turn homepage update log into a sheet

* fix: Work more on TOTP, renamed homepage to dashboard and began improvements

* fix: test commit

* fix: Fix server stats login

* feat: Complete layout of Termix dashboard

* feat: Update font for recent activity

* feat: Connect dashboard to backend and update tab system to be similar to a browser (neither are fully finished)

* feat: Improve dashboard API, improve tab system, various other fixes

* fix: Resize dashboard boxes and reduce server stats size to add scrolling

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix: Improve macOS support

* fix(auth): Fix admin user authentication for /users/db-health endpoint by adding cookie JWT support (#422)

Fixed authentication issue for admin users accessing the /users/db-health endpoint:

- Added JWT token extraction from cookies (req.cookies?.jwt)
- Added support for Bearer token from Authorization header
- Improved error handling for missing and invalid tokens
- Ensured consistent authentication flow for admin users

Changes made:
- Check for JWT token in req.cookies?.jwt
- Support Bearer token from Authorization header
- Return 401 error when token is missing
- Return 401 error when token validation fails

Fixes: https://github.com/Termix-SSH/Support/issues/12
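
A hedged sketch of the cookie-or-Bearer extraction, assuming an Express handler with cookie-parser and jsonwebtoken; the payload shape and secret handling are simplified:

import type { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization;
  const bearer = header?.startsWith("Bearer ") ? header.slice(7) : undefined;
  // Prefer the cookie set at login, fall back to the Authorization header.
  const token = (req as Request & { cookies?: { jwt?: string } }).cookies?.jwt ?? bearer;

  if (!token) return res.status(401).json({ error: "Missing token" });
  try {
    (req as Request & { user?: unknown }).user = jwt.verify(token, process.env.JWT_SECRET ?? "");
    return next();
  } catch {
    return res.status(401).json({ error: "Invalid token" });
  }
}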

* Update Docker login credentials and image names

* Update docker-image.yml

* Refactor Docker image workflow for registry options

Updated workflow to allow selection of Docker registry and simplified tag handling.

* Update Docker login conditions and tag handling

* Enhance Docker image workflow with better tagging

Updated Docker image workflow to improve tag handling and descriptions.

* Update Docker workflow for tag handling and cleanup

* Update docker-image.yml

* Update Docker workflow inputs and tag logic

Refactor Docker workflow to include version and production inputs, and streamline tag determination.

* Update Docker image workflow for multi-platform builds

* Refactor Docker image tags for clarity

Updated Docker image tags to use multi-line syntax for better readability and added latest tag conditionally.

* Fix typo in exposed ports in Dockerfile

* Update docker-image.yml

* Refactor Docker image workflow for registry handling

Removed registry input and adjusted Docker Hub login condition.

* Handle OIDC users during database import (#424)

* Update Docker image name for GitHub registry

* Fix image name casing in Docker workflow

* Remove untagged image cleanup step from workflow

Removed the step to delete untagged image versions from the workflow.

* Change Docker login to use GHCR credentials

Updated Docker login credentials for GitHub Container Registry.

* Remove cache moving step from Docker workflow

Removed the step to move the build cache in the Docker workflow.

* Refactor Docker image workflow for versioning and builds

* Update docker-image.yml

* Allow OIDC users to import database without password

* Skip import password prompt for OIDC users

* docs: clarify OIDC import unlocking flow

* docs: explain admin import password logic

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: Nikola Novoselec <nikolanovoselec@users.noreply.github.com>

* fix: Fixed various issues with the dashboard, tab bar, and database

* feat: Added none password option and fixed some navbar issues (still present)

* fix: Fix tab reload/state loss whenever moving them to the right

* feat: Make tabs auto expand and contract and scroll

* fix: Remove vertical scrolling in the tab bar and dashboard and reduce scrollbar height in tab bar

* feat: Add many terminal customizations

* feat: Add many terminal customizations

* fix: incorrect macOS logo, termix hangs on macOS, and macOS reporting incorrect version

* fix: fix macOS version build error

* fix: fix macOS build error

* fix: replaced macOS icon

* feat: Added more output types for electron and streamlined the workflow

* fix: Rollup package issue

* fix: Rollup package issue for macOS

* feat: fix macOS/Linux build error

* fix: fix macOS build error

* fix: fix macOS build error and double folder issues

* fix: fix macOS build error

* fix: fix macOS build error

* fix: fix macOS build error

* fix: fix macOS build error

* fix: files uploading as folders instead of raw executable

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing

* fix: macOS build failing and update workflow options

* fix: macOS build failing

* fix: Upload to release not finding a release

* fix: Change platform for upload to release

* fix: Standardize file naming

* fix: Build error with custom tar.gz naming

* fix: Allow .dmg signing

* fix: Fix .dmg signing

* fix: Fix notarize build error

* fix: Fix notarize build error

* fix: Add app specific password

* fix: add developer ID certificate

* fix: macOS app not closing

* fix: cache error

* Add Brazilian Portuguese translation (#425)

* Update Docker image name for GitHub registry

* Fix image name casing in Docker workflow

* Remove untagged image cleanup step from workflow

Removed the step to delete untagged image versions from the workflow.

* Change Docker login to use GHCR credentials

Updated Docker login credentials for GitHub Container Registry.

* Remove cache moving step from Docker workflow

Removed the step to move the build cache in the Docker workflow.

* Refactor Docker image workflow for versioning and builds

* Update docker-image.yml

* Add Brazilian Portuguese translation

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>

* feat: add chocolatey support

* feat: add initial flatpak/homebrew support

* fix: incorrect choco URL

* fix: rename choco package

* fix: updated package lock

* fix: move totp dialog

* feat: centralize SSH tools and allow multi terminal snippets

* fix: Squash commit of several fixes and features for many different elements

* fix: Fix some translations

* fix: pt-BR build error

* fix: npm build error

* fix: npm build error

* feat: rename gh actions

* fix: None auth and Host.tsx edit button issues

* fix: macOS dmg fail

* fix: linux not building x64

* fix: linux not uploading x64

* fix: Password reset issues, OIDC admin auth not filling, and electron x64 build issues

* feat: Squashed commit of fixing "none" authentication and adding a sessions system for mobile, electron, and web

* fix: Replace checkbox in docker build with dropdown

* fix: Issue with electron not displaying site

* fix: Issue with electron not displaying se

* fix: Issue with electron not displaying

* fix: Mobile reporting wrong user-agent

* fix: Nginx runtime error

* fix: JWT not persisting after reboot

* feat: add null to gitignore

* feat: remove sessions after reboot

* fix: File cleanup

* fix: Uncapitalize folder titles and finalize file cleanup

* fix: Build errors after cleanup

* fix: GITHUB_TOKEN issue in electron build

* fix: Random macOS build error

* fix: macOS GH token error

* fix: Incorrect desktop user agent and build issues

* fix: Notarize cleanup

* fix: None auth issues and macOS build failure and rename files for consistency

* fix: Run prettier

* feat: Update readme for iPadOS

* fix: Electron desktop not logging in

* fix: Electron desktop not logging in

* fix: Duplicated CORS headers

* fix: Electron login issues

* fix: Sqlite package fix

* fix: Desktop app login issues and rename version check and host manager folder

* fix: Electron HTTP fix + stripped background fix

* fix: Electron security issues and TOTP/None auth issues

* fix: Server config showing in web view

* fix: Update readme

* fix: Update readme

* [FEATURE] Adjustable Left Menu Width in Web Interface (#427)

#234
Added resize functionality to LeftSidebar.tsx
Updated TopNavbar.tsx to use the sidebar's dynamic width

Co-authored-by: Robert Coroianu <robert.coroianu@easydo.co>
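
A rough React sketch of a draggable width handle of the kind described above; the bounds and markup are illustrative, not the actual LeftSidebar.tsx code:

import React, { useState } from "react";

export function ResizableSidebar({ children }: { children: React.ReactNode }) {
  const [width, setWidth] = useState(256);

  const startResize = (e: React.MouseEvent) => {
    const startX = e.clientX;
    const startWidth = width;
    const onMove = (ev: MouseEvent) =>
      setWidth(Math.min(480, Math.max(180, startWidth + ev.clientX - startX)));
    const onUp = () => {
      window.removeEventListener("mousemove", onMove);
      window.removeEventListener("mouseup", onUp);
    };
    window.addEventListener("mousemove", onMove);
    window.addEventListener("mouseup", onUp);
  };

  return (
    <aside style={{ width }}>
      {children}
      {/* Drag this strip to resize; TopNavbar can read the same width value. */}
      <div onMouseDown={startResize} style={{ cursor: "col-resize", width: 4 }} />
    </aside>
  );
}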

* fix: Sidebar resize issues and issues with TOTP interfering with password auth

* chore: Run prettier

* fix: Tunnels being same name

* fix: Electron build problems

* fix: Type error

* fix: Linux app image and server config issue

* fix: Run linter

* fix: Incorrect android user agent

* fix: No x64 appimage and server config displaying in electron webview

* fix: Electron API and terminal websocket issues

* fix: Android user agent edgecase and electron using web view incorrectly

* feat: Added mobile and electron UI redirecting system

* fix: Fix electron login and mobile redirect

* feat: add Russian translation and readme (#428)

* Update Docker image name for GitHub registry

* Fix image name casing in Docker workflow

* Remove untagged image cleanup step from workflow

Removed the step to delete untagged image versions from the workflow.

* Change Docker login to use GHCR credentials

Updated Docker login credentials for GitHub Container Registry.

* Remove cache moving step from Docker workflow

Removed the step to move the build cache in the Docker workflow.

* Refactor Docker image workflow for versioning and builds

* Update docker-image.yml

* Update print statement from 'Hello' to 'Goodbye'

* Update docker build

* Rename docker-image.yml to docker.yml

* Rename electron-build.yml to electron.yml

* feat: add Russian translation and readme

* feat: Added mobile and electron UI redirecting system

* fix: Fix electron login and mobile redirect

* Update Docker image name for GitHub registry

* Fix image name casing in Docker workflow

* Remove untagged image cleanup step from workflow

Removed the step to delete untagged image versions from the workflow.

* Change Docker login to use GHCR credentials

Updated Docker login credentials for GitHub Container Registry.

* Remove cache moving step from Docker workflow

Removed the step to move the build cache in the Docker workflow.

* Refactor Docker image workflow for versioning and builds

* Update docker-image.yml

* Update print statement from 'Hello' to 'Goodbye'

* Update docker build

* Rename docker-image.yml to docker.yml

* Rename electron-build.yml to electron.yml

* feat: add Russian translation and readme

* fix: Add russian

---------

Co-authored-by: Luke Gustafson <88517757+LukeGus@users.noreply.github.com>
Co-authored-by: root <root@codeserver.192.168.0.5>
Co-authored-by: LukeGus <bugattiguy527@gmail.com>

* fix: remove russian readme

* fix: Revert workflows back to normal

* fix: Session invoking all sessions and mobile success redirect not displaying

* fix: Logging out on one device logs out all on same user

* fix: Improve session clearing (possible RC)

* fix: Linux portable naming incorrect

* fix: Linux desktop not opening

* fix: Linux build failure

* fix: Linux build failure

* fix: Linux build failure

* fix: Linux build failure

* fix: Linux sandbox issue

* fix: Linux sandbox issue

* fix: Linux sandbox issue

* fix: Finalize electron

* fix: Database check failure (release candidate)

* fix: Run cleanup and final fix for electron

---------

Co-authored-by: Ved Prakash <54140516+thorved@users.noreply.github.com>
Co-authored-by: P3RF3CTION <herzmaximilian@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: ZacharyZcR <zacharyzcr1984@gmail.com>
Co-authored-by: ZacharyZcR <2903735704@qq.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: blacksmith-sh[bot] <157653362+blacksmith-sh[bot]@users.noreply.github.com>
Co-authored-by: suraimu-team <team@suraimu.com>
Co-authored-by: Nikola Novoselec <12149536+nikolanovoselec@users.noreply.github.com>
Co-authored-by: Nikola Novoselec <nikolanovoselec@users.noreply.github.com>
Co-authored-by: xhemp <13650956+xhemp@users.noreply.github.com>
Co-authored-by: Robert Coroianu <robert.coroianu@gmail.com>
Co-authored-by: Robert Coroianu <robert.coroianu@easydo.co>
Co-authored-by: shizaterrorblade <shizaterrorblayde@gmail.com>
Co-authored-by: root <root@codeserver.192.168.0.5>
2025-11-05 10:36:16 -06:00
Luke Gustafson
dc29646a39 Rename electron-build.yml to electron.yml 2025-10-29 19:20:38 -05:00
Luke Gustafson
41add20e0a Rename docker-image.yml to docker.yml 2025-10-29 19:20:15 -05:00
Luke Gustafson
df19569313 Update docker build 2025-10-29 19:19:47 -05:00
Luke Gustafson
b0e49ffb4f Update print statement from 'Hello' to 'Goodbye' 2025-10-29 19:19:02 -05:00
Luke Gustafson
40ac75de81 Update docker-image.yml 2025-10-20 17:14:19 -05:00
Luke Gustafson
ad1864f062 Refactor Docker image workflow for versioning and builds 2025-10-20 16:22:11 -05:00
Luke Gustafson
300e0a263f Remove cache moving step from Docker workflow
Removed the step to move the build cache in the Docker workflow.
2025-10-20 15:35:52 -05:00
Luke Gustafson
9dd79929e8 Change Docker login to use GHCR credentials
Updated Docker login credentials for GitHub Container Registry.
2025-10-20 15:22:41 -05:00
Luke Gustafson
8c867d3b16 Remove untagged image cleanup step from workflow
Removed the step to delete untagged image versions from the workflow.
2025-10-20 15:13:40 -05:00
Luke Gustafson
2450ae732e Fix image name casing in Docker workflow 2025-10-20 13:04:59 -05:00
Luke Gustafson
513a88826d Update Docker image name for GitHub registry 2025-10-20 12:59:37 -05:00
blacksmith-sh[bot]
6dca33efba Migrate workflows to Blacksmith (#421)
Co-authored-by: blacksmith-sh[bot] <157653362+blacksmith-sh[bot]@users.noreply.github.com>
2025-10-13 11:55:39 -05:00
LukeGus
a4873e96bf Update links to new organization 2025-10-12 01:33:30 -05:00
Karmaa
d12fab425d Update repository links and badge URLs in README 2025-10-12 00:45:34 -05:00
Karmaa
e49ee1fe82 Revise support instructions in README.md
Updated support section with new issue reporting instructions and clarified Discord support response times.
2025-10-12 00:25:06 -05:00
Karmaa
e7eb0b0597 Add issue template configuration for support links 2025-10-12 00:23:52 -05:00
Karmaa
4e736791fa Delete .github/ISSUE_TEMPLATE/feature_request.yml 2025-10-12 00:22:52 -05:00
Karmaa
f0b35c8cfe Delete .github/ISSUE_TEMPLATE/bug_report.yml 2025-10-12 00:22:45 -05:00
LukeGus
d50ed7fa70 Update package version 2025-10-09 00:20:37 -05:00
312 changed files with 119196 additions and 19667 deletions

.commitlintrc.json (new file, 21 lines)

@@ -0,0 +1,21 @@
{
"extends": ["@commitlint/config-conventional"],
"rules": {
"type-enum": [
2,
"always",
[
"feat",
"fix",
"docs",
"style",
"refactor",
"perf",
"test",
"chore",
"revert"
]
],
"subject-case": [0]
}
}


@@ -1,29 +1,24 @@
# Dependencies
node_modules
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Build outputs
dist
build
.next
.nuxt
# Development files
.env.local
.env.development.local
.env.test.local
.env.production.local
# IDE and editor files
.vscode
.idea
*.swp
*.swo
*~
# OS generated files
.DS_Store
.DS_Store?
._*
@@ -32,98 +27,67 @@ build
ehthumbs.db
Thumbs.db
# Git
.git
.gitignore
# Documentation
README.md
README-CN.md
CONTRIBUTING.md
LICENSE
# Docker files (avoid copying docker files into docker)
# docker/ - commented out to allow entrypoint.sh to be copied
# Repository images
repo-images/
# Uploads directory
uploads/
# Electron files (not needed for Docker)
electron/
electron-builder.json
# Development and build artifacts
*.log
*.tmp
*.temp
# Font files (we'll optimize these in Dockerfile)
# public/fonts/*.ttf
# Logs
logs
*.log
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Coverage directory used by tools like istanbul
coverage
# nyc test coverage
.nyc_output
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache
# next.js build output
.next
# nuxt.js build output
.nuxt
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# TernJS port file
.tern-port
.tern-port

.editorconfig (new file, 14 lines)

@@ -0,0 +1,14 @@
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[*.{js,jsx,ts,tsx,json,css,scss,md,yml,yaml}]
indent_style = space
indent_size = 2
[*.md]
trim_trailing_whitespace = false

.gitattributes (new file, 31 lines)

@@ -0,0 +1,31 @@
* text=auto eol=lf
*.js text eol=lf
*.jsx text eol=lf
*.ts text eol=lf
*.tsx text eol=lf
*.json text eol=lf
*.css text eol=lf
*.scss text eol=lf
*.html text eol=lf
*.md text eol=lf
*.yaml text eol=lf
*.yml text eol=lf
*.sh text eol=lf
*.bash text eol=lf
*.bat text eol=crlf
*.cmd text eol=crlf
*.ps1 text eol=crlf
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.svg binary
*.woff binary
*.woff2 binary
*.ttf binary
*.eot binary


@@ -1,82 +0,0 @@
name: Bug report
description: Create a report to help Termix improve
title: "[BUG]"
labels: [bug]
assignees: []
body:
- type: input
id: title
attributes:
label: Title
description: Brief, descriptive title for the bug
placeholder: "Brief description of the bug"
validations:
required: true
- type: dropdown
id: platform
attributes:
label: Platform
description: How are you using Termix?
options:
- Website - Firefox
- Website - Safari
- Website - Chrome
- Website - Other Browser
- App - Windows
- App - Linux
- App - iOS
- App - Android
validations:
required: true
- type: dropdown
id: server-installation-method
attributes:
label: Server Installation Method
description: How is the Termix server installed?
options:
- Docker
- Manual Build
validations:
required: true
- type: input
id: version
attributes:
label: Version
description: Find your version in the User Profile tab
placeholder: "e.g., 1.7.0"
validations:
required: true
- type: checkboxes
id: troubleshooting
attributes:
label: Troubleshooting
description: Please check all that apply
options:
- label: I have examined logs and tried to find the issue
- label: I have reviewed opened and closed issues
- label: I have tried restarting the application
- type: textarea
id: problem-description
attributes:
label: The Problem
description: Describe the bug in detail. Include as much information as possible with screenshots if applicable.
placeholder: "Describe what went wrong..."
validations:
required: true
- type: textarea
id: reproduction-steps
attributes:
label: How to Reproduce
description: Use as few steps as possible to reproduce the issue
placeholder: |
1.
2.
3.
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Any other context about the problem
placeholder: "Add any other context about the problem here..."

.github/ISSUE_TEMPLATE/config.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Support Center
url: https://github.com/Termix-SSH/Support/issues
about: Report any feature requests or bugs in the support center
- name: Discord
url: https://discord.gg/jVQGdvHDrf
about: Official Termix Discord server for general discussion and quick support


@@ -1,36 +0,0 @@
name: Feature request
description: Suggest an idea for Termix
title: "[FEATURE]"
labels: [enhancement]
assignees: []
body:
- type: input
id: title
attributes:
label: Title
description: Brief, descriptive title for the feature request
placeholder: "Brief description of the feature"
validations:
required: true
- type: textarea
id: related-issue
attributes:
label: Is it related to an issue?
description: Describe the problem this feature would solve
placeholder: "Describe what problem this feature would solve..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: The Solution
description: Describe your proposed solution in detail
placeholder: "Describe how you envision this feature working..."
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Any other context or screenshots about the feature request
placeholder: "Add any other context about the feature request here..."


@@ -28,4 +28,4 @@ _(Optional: add before/after screenshots, GIFs, or console output)_
- [ ] Code follows project style guidelines
- [ ] Supports mobile and desktop UI/app (if applicable)
- [ ] I have read [Contributing.md](https://github.com/LukeGus/Termix/blob/main/CONTRIBUTING.md)
- [ ] I have read [Contributing.md](https://github.com/Termix-SSH/Termix/blob/main/CONTRIBUTING.md)


@@ -1,137 +0,0 @@
name: Build and Push Docker Image
on:
workflow_dispatch:
inputs:
tag_name:
description: "Custom tag name for the Docker image"
required: false
default: ""
registry:
description: "Docker registry to push to"
required: true
default: "ghcr"
type: choice
options:
- "ghcr"
- "dockerhub"
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
fetch-depth: 1
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: arm64
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
platforms: linux/amd64,linux/arm64
driver-opts: |
image=moby/buildkit:master
network=host
- name: Cache npm dependencies
uses: actions/cache@v4
with:
path: |
~/.npm
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.ref_name }}-${{ hashFiles('docker/Dockerfile') }}
restore-keys: |
${{ runner.os }}-buildx-${{ github.ref_name }}-
${{ runner.os }}-buildx-
- name: Login to GitHub Container Registry
if: github.event.inputs.registry != 'dockerhub'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event.inputs.registry == 'dockerhub'
uses: docker/login-action@v3
with:
username: bugattiguy527
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Determine Docker image tag
run: |
REPO_OWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
echo "REPO_OWNER=$REPO_OWNER" >> $GITHUB_ENV
if [ "${{ github.event.inputs.tag_name }}" != "" ]; then
IMAGE_TAG="${{ github.event.inputs.tag_name }}"
elif [ "${{ github.ref }}" == "refs/heads/main" ]; then
IMAGE_TAG="latest"
elif [ "${{ github.ref }}" == "refs/heads/development" ]; then
IMAGE_TAG="development-latest"
else
IMAGE_TAG="${{ github.ref_name }}"
fi
echo "IMAGE_TAG=$IMAGE_TAG" >> $GITHUB_ENV
# Determine registry and image name
if [ "${{ github.event.inputs.registry }}" == "dockerhub" ]; then
echo "REGISTRY=docker.io" >> $GITHUB_ENV
echo "IMAGE_NAME=bugattiguy527/termix" >> $GITHUB_ENV
else
echo "REGISTRY=ghcr.io" >> $GITHUB_ENV
echo "IMAGE_NAME=$REPO_OWNER/termix" >> $GITHUB_ENV
fi
- name: Build and Push Multi-Arch Docker Image
uses: docker/build-push-action@v6
with:
context: .
file: ./docker/Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
build-args: |
BUILDKIT_INLINE_CACHE=1
BUILDKIT_CONTEXT_KEEP_GIT_DIR=1
outputs: type=registry,compression=zstd,compression-level=19
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Delete all untagged image versions
if: success() && github.event.inputs.registry != 'dockerhub'
uses: quartx-analytics/ghcr-cleaner@v1
with:
owner-type: user
token: ${{ secrets.GHCR_TOKEN }}
repository-owner: ${{ github.repository_owner }}
delete-untagged: true
- name: Cleanup Docker Images Locally
if: always()
run: |
docker image prune -af
docker system prune -af --volumes

94
.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,94 @@
name: Build and Push Docker Image
on:
workflow_dispatch:
inputs:
version:
description: "Version to build (e.g., 1.8.0)"
required: true
build_type:
description: "Build type"
required: true
default: "Development"
type: choice
options:
- Development
- Production
jobs:
build:
runs-on: blacksmith-4vcpu-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
fetch-depth: 1
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: linux/amd64,linux/arm64,linux/arm/v7
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Determine tags
id: tags
run: |
VERSION=${{ github.event.inputs.version }}
BUILD_TYPE=${{ github.event.inputs.build_type }}
TAGS=()
ALL_TAGS=()
if [ "$BUILD_TYPE" = "Production" ]; then
TAGS+=("release-$VERSION" "latest")
for tag in "${TAGS[@]}"; do
ALL_TAGS+=("ghcr.io/lukegus/termix:$tag")
ALL_TAGS+=("docker.io/bugattiguy527/termix:$tag")
done
else
TAGS+=("dev-$VERSION")
for tag in "${TAGS[@]}"; do
ALL_TAGS+=("ghcr.io/lukegus/termix:$tag")
done
fi
echo "ALL_TAGS=$(IFS=,; echo "${ALL_TAGS[*]}")" >> $GITHUB_ENV
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: lukegus
password: ${{ secrets.GHCR_TOKEN }}
- name: Login to Docker Hub (prod only)
if: ${{ github.event.inputs.build_type == 'Production' }}
uses: docker/login-action@v3
with:
username: bugattiguy527
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push multi-arch image
uses: docker/build-push-action@v5
with:
context: .
file: ./docker/Dockerfile
push: true
platforms: linux/amd64,linux/arm64,linux/arm/v7
tags: ${{ env.ALL_TAGS }}
build-args: |
BUILDKIT_INLINE_CACHE=1
BUILDKIT_CONTEXT_KEEP_GIT_DIR=1
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ github.run_id }}
outputs: type=registry,compression=gzip,compression-level=9
- name: Cleanup Docker
if: always()
run: |
docker image prune -af
docker system prune -af --volumes
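
A minimal sketch of triggering this workflow manually with the GitHub CLI; the repository slug and both input values are placeholders for illustration, not part of the workflow file:

```sh
# Hypothetical manual trigger of the docker.yml workflow above (values are placeholders)
gh workflow run docker.yml \
  --repo Termix-SSH/Termix \
  -f version=1.8.0 \
  -f build_type=Production
```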


@@ -1,93 +0,0 @@
name: Build Electron App
on:
workflow_dispatch:
inputs:
build_type:
description: "Build type to run"
required: true
default: "all"
type: choice
options:
- all
- windows
- linux
jobs:
build-windows:
runs-on: windows-latest
if: github.event.inputs.build_type == 'all' || github.event.inputs.build_type == 'windows' || github.event.inputs.build_type == ''
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
fetch-depth: 1
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Build Windows Portable
run: npm run build:win-portable
- name: Build Windows Installer
run: npm run build:win-installer
- name: Create Windows Portable zip
run: |
Compress-Archive -Path "release/win-unpacked/*" -DestinationPath "Termix-Windows-Portable.zip"
- name: Upload Windows Portable Artifact
uses: actions/upload-artifact@v4
with:
name: Termix-Windows-Portable
path: Termix-Windows-Portable.zip
retention-days: 30
- name: Upload Windows Installer Artifact
uses: actions/upload-artifact@v4
with:
name: Termix-Windows-Installer
path: release/*.exe
retention-days: 30
build-linux:
runs-on: ubuntu-latest
if: github.event.inputs.build_type == 'all' || github.event.inputs.build_type == 'linux' || github.event.inputs.build_type == ''
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
fetch-depth: 1
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Build Linux Portable
run: npm run build:linux-portable
- name: Create Linux Portable zip
run: |
cd release/linux-unpacked
zip -r ../../Termix-Linux-Portable.zip *
cd ../..
- name: Upload Linux Portable Artifact
uses: actions/upload-artifact@v4
with:
name: Termix-Linux-Portable
path: Termix-Linux-Portable.zip
retention-days: 30

1007
.github/workflows/electron.yml vendored Normal file

File diff suppressed because it is too large.

35
.github/workflows/pr-check.yml vendored Normal file

@@ -0,0 +1,35 @@
name: PR Check
on:
pull_request:
branches: [main, dev-*]
jobs:
lint-and-build:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
- name: Install dependencies
run: |
rm -rf node_modules package-lock.json
npm install
- name: Run ESLint
run: npx eslint .
- name: Run Prettier check
run: npx prettier --check .
- name: Type check
run: npx tsc --noEmit
- name: Build
run: npm run build
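
The same checks can be reproduced locally before opening a pull request; this sketch simply repeats the commands from the lint-and-build job above:

```sh
# Mirror the PR checks locally (same commands as the workflow steps above)
npm install
npx eslint .
npx prettier --check .
npx tsc --noEmit
npm run build
```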

437
.github/workflows/translate.yml vendored Normal file

@@ -0,0 +1,437 @@
name: Auto Translate
on:
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
translate-zh:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t zh --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-zh
path: src/locales/zh.json
continue-on-error: true
translate-ru:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t ru --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-ru
path: src/locales/ru.json
continue-on-error: true
translate-pt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t pt --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-pt
path: src/locales/pt.json
continue-on-error: true
translate-fr:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t fr --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-fr
path: src/locales/fr.json
continue-on-error: true
translate-es:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t es --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-es
path: src/locales/es.json
continue-on-error: true
translate-de:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t de --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-de
path: src/locales/de.json
continue-on-error: true
translate-hi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t hi --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-hi
path: src/locales/hi.json
continue-on-error: true
translate-bn:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t bn --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-bn
path: src/locales/bn.json
continue-on-error: true
translate-ja:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t ja --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-ja
path: src/locales/ja.json
continue-on-error: true
translate-vi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t vi --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-vi
path: src/locales/vi.json
continue-on-error: true
translate-tr:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t tr --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-tr
path: src/locales/tr.json
continue-on-error: true
translate-ko:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t ko --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-ko
path: src/locales/ko.json
continue-on-error: true
translate-it:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t it --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-it
path: src/locales/it.json
continue-on-error: true
translate-he:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t he --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-he
path: src/locales/he.json
continue-on-error: true
translate-ar:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t ar --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-ar
path: src/locales/ar.json
continue-on-error: true
translate-pl:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t pl --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-pl
path: src/locales/pl.json
continue-on-error: true
translate-nl:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t nl --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-nl
path: src/locales/nl.json
continue-on-error: true
translate-sv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t sv --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-sv
path: src/locales/sv.json
continue-on-error: true
translate-id:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t id --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-id
path: src/locales/id.json
continue-on-error: true
translate-th:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t th --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-th
path: src/locales/th.json
continue-on-error: true
translate-uk:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t uk --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-uk
path: src/locales/uk.json
continue-on-error: true
translate-cs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t cs --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-cs
path: src/locales/cs.json
continue-on-error: true
translate-ro:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t ro --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-ro
path: src/locales/ro.json
continue-on-error: true
translate-el:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t el --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-el
path: src/locales/el.json
continue-on-error: true
translate-nb:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npx i18n-auto-translation -k ${{ secrets.GOOGLE_TRANSLATE_API_KEY }} -d "src/locales" -f en -t nb --maxLinesPerRequest 1
- uses: actions/upload-artifact@v4
with:
name: translations-nb
path: src/locales/nb.json
continue-on-error: true
create-pr:
needs:
[
translate-zh,
translate-ru,
translate-pt,
translate-fr,
translate-es,
translate-de,
translate-hi,
translate-bn,
translate-ja,
translate-vi,
translate-tr,
translate-ko,
translate-it,
translate-he,
translate-ar,
translate-pl,
translate-nl,
translate-sv,
translate-id,
translate-th,
translate-uk,
translate-cs,
translate-ro,
translate-el,
translate-nb,
]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GHCR_TOKEN }}
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: translations-temp
- name: Move translations to src/locales
run: |
cp translations-temp/translations-zh/zh.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-ru/ru.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-pt/pt.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-fr/fr.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-es/es.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-de/de.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-hi/hi.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-bn/bn.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-ja/ja.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-vi/vi.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-tr/tr.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-ko/ko.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-it/it.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-he/he.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-ar/ar.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-pl/pl.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-nl/nl.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-sv/sv.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-id/id.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-th/th.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-uk/uk.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-cs/cs.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-ro/ro.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-el/el.json src/locales/ 2>/dev/null || true
cp translations-temp/translations-nb/nb.json src/locales/ 2>/dev/null || true
rm -rf translations-temp
- name: Create Pull Request
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.GHCR_TOKEN }}
commit-message: "chore: auto-translate to multiple languages"
branch: translations-update
delete-branch: true
title: "chore: Update translations for all languages"

6
.gitignore vendored

@@ -1,4 +1,3 @@
# Logs
logs
*.log
npm-debug.log*
@@ -12,7 +11,6 @@ dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
@@ -27,3 +25,7 @@ dist-ssr
/.claude/
/ssl/
.env
/.mcp.json
/nul
/.vscode/
/CLAUDE.md

1
.husky/commit-msg Normal file

@@ -0,0 +1 @@
npx --no -- commitlint --edit $1

1
.husky/pre-commit Normal file

@@ -0,0 +1 @@
npx lint-staged
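
The two Husky hooks delegate to commitlint and lint-staged; a sketch of invoking the same checks by hand (the commit range is an assumption for illustration):

```sh
# What the hooks run, invoked manually
npx lint-staged               # pre-commit: lint/format staged files
npx commitlint --from HEAD~1  # commit-msg: validate the latest commit message
```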

2
.nvmrc

@@ -1 +1 @@
22
20


@@ -1,3 +1,18 @@
# Ignore artifacts:
build
coverage
dist
dist-ssr
release
node_modules
package-lock.json
pnpm-lock.yaml
yarn.lock
db
.env
*.min.js
*.min.css
openapi.json


@@ -1 +1,9 @@
{}
{
"semi": true,
"singleQuote": false,
"tabWidth": 2,
"trailingComma": "all",
"printWidth": 80,
"arrowParens": "always",
"endOfLine": "lf"
}


@@ -10,7 +10,7 @@
1. Clone the repository:
```sh
git clone https://github.com/LukeGus/Termix
git clone https://github.com/Termix-SSH/Termix
```
2. Install the dependencies:
```sh
@@ -31,7 +31,7 @@ This will start the backend and the frontend Vite server. You can access Termix
## Contributing
1. **Fork the repository**: Click the "Fork" button at the top right of
the [repository page](https://github.com/LukeGus/Termix).
the [repository page](https://github.com/Termix-SSH/Termix).
2. **Create a new branch**:
```sh
git checkout -b feature/my-new-feature
@@ -101,6 +101,6 @@ This will start the backend and the frontend Vite server. You can access Termix
## Support
If you need help with Termix, you can join the [Discord](https://discord.gg/jVQGdvHDrf) server and visit the support
channel. You can also open an issue or open a pull request on the [GitHub](https://github.com/LukeGus/Termix/issues)
repo.
If you need help or want to request a feature with Termix, visit the [Issues](https://github.com/Termix-SSH/Support/issues) page, log in, and press `New Issue`.
Please be as detailed as possible in your issue, preferably written in English. You can also join the [Discord](https://discord.gg/jVQGdvHDrf) server and visit the support
channel, however, response times may be longer.

24
Casks/termix.rb Normal file

@@ -0,0 +1,24 @@
cask "termix" do
version "1.10.0"
sha256 "327c5026006c949f992447835aa6754113f731065b410bedbfa5da5af7cb2386"
url "https://github.com/Termix-SSH/Termix/releases/download/release-#{version}-tag/termix_macos_universal_dmg.dmg"
name "Termix"
desc "Web-based server management platform with SSH terminal, tunneling, and file editing"
homepage "https://github.com/Termix-SSH/Termix"
livecheck do
url :url
strategy :github_latest
end
app "Termix.app"
zap trash: [
"~/Library/Application Support/termix",
"~/Library/Caches/com.karmaa.termix",
"~/Library/Caches/com.karmaa.termix.ShipIt",
"~/Library/Preferences/com.karmaa.termix.plist",
"~/Library/Saved Application State/com.karmaa.termix.savedState",
]
end
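
If this cask is published through a Homebrew tap, installation might look like the sketch below; the tap name is a placeholder assumption, not something defined in the cask itself:

```sh
# Hypothetical install of the cask above; the tap name is a placeholder
brew tap termix-ssh/termix
brew install --cask termix
```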


@@ -1,13 +1,13 @@
# 仓库统计
<p align="center">
<a href="README.md"><img src="https://flagcdn.com/us.svg" alt="English" width="24" height="16"> 英文</a> |
<a href="README.md"><img src="https://flagcdn.com/us.svg" alt="English" width="24" height="16"> 英文</a> |
<img src="https://flagcdn.com/cn.svg" alt="中文" width="24" height="16"> 中文
</p>
![GitHub Repo stars](https://img.shields.io/github/stars/LukeGus/Termix?style=flat&label=Stars)
![GitHub forks](https://img.shields.io/github/forks/LukeGus/Termix?style=flat&label=Forks)
![GitHub Release](https://img.shields.io/github/v/release/LukeGus/Termix?style=flat&label=Release)
![GitHub Repo stars](https://img.shields.io/github/stars/Termix-SSH/Termix?style=flat&label=Stars)
![GitHub forks](https://img.shields.io/github/forks/Termix-SSH/Termix?style=flat&label=Forks)
![GitHub Release](https://img.shields.io/github/v/release/Termix-SSH/Termix?style=flat&label=Release)
<a href="https://discord.gg/jVQGdvHDrf"><img alt="Discord" src="https://img.shields.io/discord/1347374268253470720"></a>
<p align="center">
@@ -29,7 +29,7 @@
<br />
<p align="center">
<a href="https://github.com/LukeGus/Termix">
<a href="https://github.com/Termix-SSH/Termix">
<img alt="Termix Banner" src=./repo-images/HeaderImage.png style="width: auto; height: auto;"> </a>
</p>
@@ -39,34 +39,65 @@
# 概览
<p align="center">
<a href="https://github.com/LukeGus/Termix">
<a href="https://github.com/Termix-SSH/Termix">
<img alt="Termix Banner" src=./public/icon.svg style="width: 250px; height: 250px;"> </a>
</p>
Termix 是一个开源、永久免费、自托管的一体化服务器管理平台。它提供了一个基于网页的解决方案通过一个直观的界面管理你的服务器和基础设施。Termix
提供 SSH 终端访问、SSH 隧道功能以及远程文件编辑,还会陆续添加更多工具。
提供 SSH 终端访问、SSH 隧道功能以及远程文件管理,还会陆续添加更多工具。Termix 是适用于所有平台的完美免费自托管 Termius 替代品。
# 功能
- **SSH 终端访问** - 功能完整的终端,支持分屏(最多 4 个面板)和标签系统
- **SSH 隧道管理** - 创建和管理 SSH 隧道,支持自动重连和健康监控
- **远程文件编辑器** - 直接在远程服务器编辑文件,支持语法高亮和文件管理功能(上传、删除、重命名等)
- **SSH 主机管理** - 保存、组织和管理 SSH 连接,支持标签和文件夹
- **服务器统计** - 查看任意 SSH 服务器的 CPU、内存和硬盘使用情况
- **用户认证** - 安全的用户管理,支持管理员控制、OIDC 和双因素认证(TOTP)
- **现代化界面** - 使用 React、Tailwind CSS 和 Shadcn 构建的简洁界面
- **语言支持** - 内置中英文支持
- **SSH 终端访问** - 功能齐全的终端,具有分屏支持(最多 4 个面板)和类似浏览器的选项卡系统。包括对自定义终端的支持,包括常见终端主题、字体和其他组件
- **SSH 隧道管理** - 创建和管理 SSH 隧道,具有自动重新连接和健康监控功能
- **远程文件管理器** - 直接在远程服务器上管理文件,支持查看和编辑代码、图像、音频和视频。无缝上传、下载、重命名、删除和移动文件
- **Docker 管理** - 启动、停止、暂停、删除容器。查看容器统计信息。使用 docker exec 终端控制容器。它不是用来替代 Portainer 或 Dockge而是用于简单管理你的容器而不是创建它们。
- **SSH 主机管理器** - 保存、组织和管理您的 SSH 连接,支持标签和文件夹,并轻松保存可重用的登录信息,同时能够自动部署 SSH 密钥
- **服务器统计** - 在任何 SSH 服务器上查看 CPU、内存和磁盘使用情况以及网络、正常运行时间和系统信息
- **仪表板** - 在仪表板上一目了然地查看服务器信息
- **RBAC** - 创建角色并在用户/角色之间共享主机
- **用户认证** - 安全的用户管理,具有管理员控制以及 OIDC 和 2FA (TOTP) 支持。查看所有平台上的活动用户会话并撤销权限。将您的 OIDC/本地帐户链接在一起。
- **数据库加密** - 后端存储为加密的 SQLite 数据库文件。查看[文档](https://docs.termix.site/security)了解更多信息。
- **数据导出/导入** - 导出和导入 SSH 主机、凭据和文件管理器数据
- **自动 SSL 设置** - 内置 SSL 证书生成和管理,支持 HTTPS 重定向
- **现代用户界面** - 使用 React、Tailwind CSS 和 Shadcn 构建的简洁的桌面/移动设备友好界面。可选择基于深色或浅色模式的用户界面。
- **语言** - 内置支持约 30 种语言(通过 Google 翻译批量翻译,结果可能有所不同)
- **平台支持** - 可作为 Web 应用程序、桌面应用程序Windows、Linux 和 macOS以及适用于 iOS 和 Android 的专用移动/平板电脑应用程序。
- **SSH 工具** - 创建可重用的命令片段,单击即可执行。在多个打开的终端上同时运行一个命令。
- **命令历史** - 自动完成并查看以前运行的 SSH 命令
- **命令面板** - 双击左 Shift 键可快速使用键盘访问 SSH 连接
- **SSH 功能丰富** - 支持跳板机、warpgate、基于 TOTP 的连接、SOCKS5、密码自动填充等。
# 计划功能
- **增强管理员控制** - 提供更精细的用户和管理员权限控制、共享主机等功能
- **主题定制** - 修改所有工具的主题风格
- **增强终端支持** - 添加更多终端协议,如 VNC 和 RDP有类似 Apache Guacamole 的 RDP 集成经验者请通过创建 issue 联系我)
- **移动端支持** - 支持移动应用或 Termix 网站移动版,让你在手机上管理服务器
查看 [项目](https://github.com/orgs/Termix-SSH/projects/2) 了解所有计划功能。如果你想贡献代码,请参阅 [贡献指南](https://github.com/Termix-SSH/Termix/blob/main/CONTRIBUTING.md)。
# 安装
访问 Termix [文档](https://docs.termix.site/install) 获取安装信息。或者可以参考以下示例 docker-compose 文件
支持的设备
- 网站(任何平台上的任何现代浏览器,如 Chrome、Safari 和 Firefox
- Windowsx64/ia32
- 便携版
- MSI 安装程序
- Chocolatey 软件包管理器(即将推出)
- Linuxx64/ia32
- 便携版
- AppImage
- Deb
- Flatpak即将推出
- macOSx64/ia32 on v12.0+
- Apple App Store即将推出
- DMG
- Homebrew即将推出
- iOS/iPadOSv15.1+
- Apple App Store
- ISO
- Androidv7.0+
- Google Play 商店
- APK
访问 Termix [文档](https://docs.termix.site/install) 了解有关如何在所有平台上安装 Termix 的更多信息。或者,在此处查看示例 Docker Compose 文件:
```yaml
services:
@@ -88,8 +119,9 @@ volumes:
# 支持
如果你需要 Termix 的帮助,可以加入 [Discord](https://discord.gg/jVQGdvHDrf)
服务器并访问支持频道。你也可以在 [GitHub](https://github.com/LukeGus/Termix/issues) 仓库提交 issue 或 pull request。
如果你需要 Termix 的帮助或想要请求功能,请访问 [Issues](https://github.com/Termix-SSH/Support/issues) 页面,登录并点击 `New Issue`
请尽可能详细地描述你的问题,最好使用英语。你也可以加入 [Discord](https://discord.gg/jVQGdvHDrf) 服务器并访问支持
频道,但响应时间可能较长。
# 展示
@@ -99,17 +131,32 @@ volumes:
</p>
<p align="center">
<img src="./repo-images/Image 3.png" width="250" alt="Termix Demo 3"/>
<img src="./repo-images/Image 4.png" width="250" alt="Termix Demo 4"/>
<img src="./repo-images/Image 5.png" width="250" alt="Termix Demo 5"/>
<img src="./repo-images/Image 3.png" width="400" alt="Termix Demo 3"/>
<img src="./repo-images/Image 4.png" width="400" alt="Termix Demo 4"/>
</p>
<p align="center">
<video src="https://github.com/user-attachments/assets/f9caa061-10dc-4173-ae7d-c6d42f05cf56" width="800" controls>
<img src="./repo-images/Image 5.png" width="400" alt="Termix Demo 5"/>
<img src="./repo-images/Image 6.png" width="400" alt="Termix Demo 6"/>
</p>
<p align="center">
<img src="./repo-images/Image 7.png" width="400" alt="Termix Demo 7"/>
<img src="./repo-images/Image 8.png" width="400" alt="Termix Demo 8"/>
</p>
<p align="center">
<img src="./repo-images/Image 9.png" width="400" alt="Termix Demo 9"/>
<img src="./repo-images/Image 10.png" width="400" alt="Termix Demo 110"/>
</p>
<p align="center">
<video src="https://github.com/user-attachments/assets/88936e0d-2399-4122-8eee-c255c25da48c" width="800" controls>
你的浏览器不支持 video 标签。
</video>
</p>
某些视频和图像可能已过时或可能无法完美展示功能。
# 许可证
根据 Apache 2.0 许可证发布。更多信息请参见 LICENSE。
根据 Apache License Version 2.0 发布。更多信息请参见 LICENSE。


@@ -5,9 +5,9 @@
<a href="README-CN.md"><img src="https://flagcdn.com/cn.svg" alt="中文" width="24" height="16"> 中文</a>
</p>
![GitHub Repo stars](https://img.shields.io/github/stars/LukeGus/Termix?style=flat&label=Stars)
![GitHub forks](https://img.shields.io/github/forks/LukeGus/Termix?style=flat&label=Forks)
![GitHub Release](https://img.shields.io/github/v/release/LukeGus/Termix?style=flat&label=Release)
![GitHub Repo stars](https://img.shields.io/github/stars/Termix-SSH/Termix?style=flat&label=Stars)
![GitHub forks](https://img.shields.io/github/forks/Termix-SSH/Termix?style=flat&label=Forks)
![GitHub Release](https://img.shields.io/github/v/release/Termix-SSH/Termix?style=flat&label=Release)
<a href="https://discord.gg/jVQGdvHDrf"><img alt="Discord" src="https://img.shields.io/discord/1347374268253470720"></a>
<p align="center">
@@ -29,7 +29,7 @@
<br />
<p align="center">
<a href="https://github.com/LukeGus/Termix">
<a href="https://github.com/Termix-SSH/Termix">
<img alt="Termix Banner" src=./repo-images/HeaderImage.png style="width: auto; height: auto;"> </a>
</p>
@@ -39,43 +39,65 @@ If you would like, you can support the project here!\
# Overview
<p align="center">
<a href="https://github.com/LukeGus/Termix">
<a href="https://github.com/Termix-SSH/Termix">
<img alt="Termix Banner" src=./public/icon.svg style="width: 250px; height: 250px;"> </a>
</p>
Termix is an open-source, forever-free, self-hosted all-in-one server management platform. It provides a web-based
Termix is an open-source, forever-free, self-hosted all-in-one server management platform. It provides a multi-platform
solution for managing your servers and infrastructure through a single, intuitive interface. Termix offers SSH terminal
access, SSH tunneling capabilities, and remote file management, with many more tools to come.
access, SSH tunneling capabilities, remote file management, and many other tools. Termix is the perfect
free and self-hosted alternative to Termius available for all platforms.
# Features
- **SSH Terminal Access** - Full-featured terminal with split-screen support (up to 4 panels) and tab system
- **SSH Terminal Access** - Full-featured terminal with split-screen support (up to 4 panels) with a browser-like tab system. Includes support for customizing the terminal including common terminal themes, fonts, and other components
- **SSH Tunnel Management** - Create and manage SSH tunnels with automatic reconnection and health monitoring
- **Remote File Manager** - Manage files directly on remote servers with support for viewing and editing code, images, audio, and video. Upload, download, rename, delete, and move files seamlessly.
- **SSH Host Manager** - Save, organize, and manage your SSH connections with tags and folders and easily save reusable login info while being able to automate the deploying of SSH keys
- **Server Stats** - View CPU, memory, and HDD usage on any SSH server
- **User Authentication** - Secure user management with admin controls and OIDC and 2FA (TOTP) support
- **Database Encryption** - SQLite database files encrypted at rest with automatic encryption/decryption
- **Data Export/Import** - Export and import SSH hosts, credentials, and file manager data with incremental sync
- **Remote File Manager** - Manage files directly on remote servers with support for viewing and editing code, images, audio, and video. Upload, download, rename, delete, and move files seamlessly
- **Docker Management** - Start, stop, pause, remove containers. View container stats. Control container using docker exec terminal. It was not made to replace Portainer or Dockge but rather to simply manage your containers compared to creating them.
- **SSH Host Manager** - Save, organize, and manage your SSH connections with tags and folders, and easily save reusable login info while being able to automate the deployment of SSH keys
- **Server Stats** - View CPU, memory, and disk usage along with network, uptime, and system information on any SSH server
- **Dashboard** - View server information at a glance on your dashboard
- **RBAC** - Create roles and share hosts across users/roles
- **User Authentication** - Secure user management with admin controls and OIDC and 2FA (TOTP) support. View active user sessions across all platforms and revoke permissions. Link your OIDC/Local accounts together.
- **Database Encryption** - Backend stored as encrypted SQLite database files. View [docs](https://docs.termix.site/security) for more.
- **Data Export/Import** - Export and import SSH hosts, credentials, and file manager data
- **Automatic SSL Setup** - Built-in SSL certificate generation and management with HTTPS redirects
- **Modern UI** - Clean desktop/mobile-friendly interface built with React, Tailwind CSS, and Shadcn
- **Languages** - Built-in support for English, Chinese, and German
- **Platform Support** - Available as a web app, desktop application (Windows & Linux), and dedicated mobile app for iOS and Android. macOS and iPadOS support is planned.
- **Modern UI** - Clean desktop/mobile-friendly interface built with React, Tailwind CSS, and Shadcn. Choose between dark or light mode based UI.
- **Languages** - Built-in support ~30 languages (bulk translated via Google Translate, results may vary ofc)
- **Platform Support** - Available as a web app, desktop application (Windows, Linux, and macOS), and dedicated mobile/tablet app for iOS and Android.
- **SSH Tools** - Create reusable command snippets that execute with a single click. Run one command simultaneously across multiple open terminals.
- **Command History** - Auto-complete and view previously ran SSH commands
- **Command Palette** - Double tap left shift to quickly access SSH connections with your keyboard
- **SSH Feature Rich** - Supports jump hosts, warpgate, TOTP based connections, SOCKS5, password autofill, etc.
# Planned Features
See [Projects](https://github.com/users/LukeGus/projects/3) for all planned features. If you are looking to contribute, see [Contributing](https://github.com/LukeGus/Termix/blob/main/CONTRIBUTING.md).
See [Projects](https://github.com/orgs/Termix-SSH/projects/2) for all planned features. If you are looking to contribute, see [Contributing](https://github.com/Termix-SSH/Termix/blob/main/CONTRIBUTING.md).
# Installation
Supported Devices:
- Website (any modern browser like Google, Safari, and Firefox)
- Windows (app)
- Linux (app)
- iOS (app)
- Android (app)
- iPadOS and macOS are in progress
- Website (any modern browser on any platform like Chrome, Safari, and Firefox)
- Windows (x64/ia32)
- Portable
- MSI Installer
- Chocolatey Package Manager
- Linux (x64/ia32)
- Portable [(AUR available)](https://aur.archlinux.org/packages/termix-bin)
- AppImage
- Deb
- Flatpak
- macOS (x64/ia32 on v12.0+)
- Apple App Store
- DMG
- Homebrew
- iOS/iPadOS (v15.1+)
- Apple App Store
- ISO
- Android (v7.0+)
- Google Play Store
- APK
Visit the Termix [Docs](https://docs.termix.site/install) for more information on how to install Termix on all platforms. Otherwise, view
a sample Docker Compose file here:
@@ -100,11 +122,11 @@ volumes:
# Support
If you need help with Termix, you can join the [Discord](https://discord.gg/jVQGdvHDrf) server and visit the support
channel. You can also open an issue or open a pull request on the [GitHub](https://github.com/LukeGus/Termix/issues)
repo.
If you need help or want to request a feature with Termix, visit the [Issues](https://github.com/Termix-SSH/Support/issues) page, log in, and press `New Issue`.
Please be as detailed as possible in your issue, preferably written in English. You can also join the [Discord](https://discord.gg/jVQGdvHDrf) server and visit the support
channel, however, response times may be longer.
# Show-off
# Screenshots
<p align="center">
<img src="./repo-images/Image 1.png" width="400" alt="Termix Demo 1"/>
@@ -123,6 +145,12 @@ repo.
<p align="center">
<img src="./repo-images/Image 7.png" width="400" alt="Termix Demo 7"/>
<img src="./repo-images/Image 8.png" width="400" alt="Termix Demo 8"/>
</p>
<p align="center">
<img src="./repo-images/Image 9.png" width="400" alt="Termix Demo 9"/>
<img src="./repo-images/Image 10.png" width="400" alt="Termix Demo 110"/>
</p>
<p align="center">
@@ -130,6 +158,7 @@ repo.
Your browser does not support the video tag.
</video>
</p>
Some videos and images may be out of date or may not perfectly showcase features.
# License


@@ -2,4 +2,4 @@
## Reporting a Vulnerability
Please report any vulnerabilities to [GitHub Security](https://github.com/LukeGus/Termix/security/advisories).
Please report any vulnerabilities to [GitHub Security](https://github.com/Termix-SSH/Termix/security/advisories).

Binary file not shown.


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
</dict>
</plist>


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
</dict>
</plist>


@@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.inherit</key>
<true/>
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
</dict>
</plist>


@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
<key>com.apple.security.files.user-selected.read-write</key>
<true/>
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
</dict>
</plist>

31
build/notarize.cjs Normal file

@@ -0,0 +1,31 @@
const { notarize } = require('@electron/notarize');
exports.default = async function notarizing(context) {
const { electronPlatformName, appOutDir } = context;
if (electronPlatformName !== 'darwin') {
return;
}
const appleId = process.env.APPLE_ID;
const appleIdPassword = process.env.APPLE_ID_PASSWORD;
const teamId = process.env.APPLE_TEAM_ID;
if (!appleId || !appleIdPassword || !teamId) {
return;
}
const appName = context.packager.appInfo.productFilename;
try {
await notarize({
appBundleId: 'com.karmaa.termix',
appPath: `${appOutDir}/${appName}.app`,
appleId: appleId,
appleIdPassword: appleIdPassword,
teamId: teamId,
});
} catch (error) {
console.error('Notarization failed:', error);
}
};
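
The notarization hook above silently skips when its credentials are absent; a sketch of providing them before a macOS Electron build (all values are placeholders):

```sh
# Placeholder credentials read by build/notarize.cjs
export APPLE_ID="developer@example.com"
export APPLE_ID_PASSWORD="app-specific-password"
export APPLE_TEAM_ID="ABCDE12345"
```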


@@ -0,0 +1,35 @@
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
<metadata>
<id>termix-ssh</id>
<version>VERSION_PLACEHOLDER</version>
<packageSourceUrl>https://github.com/Termix-SSH/Termix</packageSourceUrl>
<owners>bugattiguy527</owners>
<title>Termix SSH</title>
<authors>bugattiguy527</authors>
<projectUrl>https://github.com/Termix-SSH/Termix</projectUrl>
<iconUrl>https://raw.githubusercontent.com/Termix-SSH/Termix/main/public/icon.png</iconUrl>
<licenseUrl>https://raw.githubusercontent.com/Termix-SSH/Termix/refs/heads/main/LICENSE</licenseUrl>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<projectSourceUrl>https://github.com/Termix-SSH/Termix</projectSourceUrl>
<docsUrl>https://docs.termix.site/install</docsUrl>
<bugTrackerUrl>https://github.com/Termix-SSH/Support/issues</bugTrackerUrl>
<tags>docker ssh self-hosted file-management ssh-tunnel termix server-management terminal</tags>
<summary>Termix is a web-based server management platform with SSH terminal, tunneling, and file editing capabilities.</summary>
<description>
Termix is an open-source, forever-free, self-hosted all-in-one server management platform. It provides a web-based solution for managing your servers and infrastructure through a single, intuitive interface.
Termix offers:
- SSH terminal access
- SSH tunneling capabilities
- Remote file management
- Server monitoring and management
This package installs the desktop application version of Termix.
</description>
<releaseNotes>https://github.com/Termix-SSH/Termix/releases</releaseNotes>
</metadata>
<files>
<file src="tools\**" target="tools" />
</files>
</package>


@@ -0,0 +1,20 @@
$ErrorActionPreference = 'Stop'
$packageName = 'termix-ssh'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$url64 = 'DOWNLOAD_URL_PLACEHOLDER'
$checksum64 = 'CHECKSUM_PLACEHOLDER'
$checksumType64 = 'sha256'
$packageArgs = @{
packageName = $packageName
fileType = 'msi'
url64bit = $url64
softwareName = 'Termix*'
checksum64 = $checksum64
checksumType64 = $checksumType64
silentArgs = "/qn /norestart /l*v `"$($env:TEMP)\$($packageName).$($env:chocolateyPackageVersion).MsiInstall.log`""
validExitCodes = @(0, 3010, 1641)
}
Install-ChocolateyPackage @packageArgs


@@ -0,0 +1,33 @@
$ErrorActionPreference = 'Stop'
$packageName = 'termix-ssh'
$softwareName = 'Termix*'
$installerType = 'msi'
$silentArgs = '/qn /norestart'
$validExitCodes = @(0, 3010, 1605, 1614, 1641)
[array]$key = Get-UninstallRegistryKey -SoftwareName $softwareName
if ($key.Count -eq 1) {
$key | % {
$file = "$($_.UninstallString)"
if ($installerType -eq 'msi') {
$silentArgs = "$($_.PSChildName) $silentArgs"
$file = ''
}
Uninstall-ChocolateyPackage -PackageName $packageName `
-FileType $installerType `
-SilentArgs "$silentArgs" `
-ValidExitCodes $validExitCodes `
-File "$file"
}
} elseif ($key.Count -eq 0) {
Write-Warning "$packageName has already been uninstalled by other means."
} elseif ($key.Count -gt 1) {
Write-Warning "$($key.Count) matches found!"
Write-Warning "To prevent accidental data loss, no programs will be uninstalled."
$key | % {Write-Warning "- $($_.DisplayName)"}
}
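
Once a release step fills in VERSION_PLACEHOLDER, DOWNLOAD_URL_PLACEHOLDER, and CHECKSUM_PLACEHOLDER, end-user usage of the package would roughly look like this sketch, run from an elevated Windows shell:

```sh
# Hypothetical usage of the termix-ssh Chocolatey package defined above
choco install termix-ssh
choco uninstall termix-ssh
```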

3
crowdin.yml Normal file

@@ -0,0 +1,3 @@
files:
- source: /src/locales/en.json
translation: /src/locales/translated/%two_letters_code%.json


@@ -2,16 +2,12 @@
FROM node:22-slim AS deps
WORKDIR /app
RUN apt-get update && apt-get install -y python3 make g++ && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y python3 make g++ && rm -rf /var/lib/apt/lists/*
COPY package*.json ./
ENV npm_config_target_platform=linux
ENV npm_config_target_arch=x64
ENV npm_config_target_libc=glibc
RUN rm -rf node_modules package-lock.json && \
npm install --force && \
npm install --ignore-scripts --force && \
npm cache clean --force
# Stage 2: Build frontend
@@ -23,7 +19,7 @@ COPY . .
RUN find public/fonts -name "*.ttf" ! -name "*Regular.ttf" ! -name "*Bold.ttf" ! -name "*Italic.ttf" -delete
RUN npm cache clean --force && \
npm run build
NODE_OPTIONS="--max-old-space-size=3072" npm run build
# Stage 3: Build backend
FROM deps AS backend-builder
@@ -31,10 +27,6 @@ WORKDIR /app
COPY . .
ENV npm_config_target_platform=linux
ENV npm_config_target_arch=x64
ENV npm_config_target_libc=glibc
RUN npm rebuild better-sqlite3 --force
RUN npm run build:backend
@@ -47,10 +39,6 @@ RUN apt-get update && apt-get install -y python3 make g++ && rm -rf /var/lib/apt
COPY package*.json ./
ENV npm_config_target_platform=linux
ENV npm_config_target_arch=x64
ENV npm_config_target_libc=glibc
RUN npm ci --only=production --ignore-scripts --force && \
npm rebuild better-sqlite3 bcryptjs --force && \
npm cache clean --force
@@ -65,16 +53,18 @@ ENV DATA_DIR=/app/data \
RUN apt-get update && apt-get install -y nginx gettext-base openssl && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p /app/data /app/uploads && \
chown -R node:node /app/data /app/uploads && \
useradd -r -s /bin/false nginx
mkdir -p /app/data /app/uploads /app/nginx /app/nginx/logs /app/nginx/cache /app/nginx/client_body && \
chown -R node:node /app && \
chmod 755 /app/data /app/uploads /app/nginx && \
touch /app/nginx/nginx.conf && \
chown node:node /app/nginx/nginx.conf
COPY docker/nginx.conf /etc/nginx/nginx.conf
COPY docker/nginx-https.conf /etc/nginx/nginx-https.conf
COPY docker/nginx.conf /app/nginx/nginx.conf.template
COPY docker/nginx-https.conf /app/nginx/nginx-https.conf.template
COPY --chown=nginx:nginx --from=frontend-builder /app/dist /usr/share/nginx/html
COPY --chown=nginx:nginx --from=frontend-builder /app/src/locales /usr/share/nginx/html/locales
COPY --chown=nginx:nginx --from=frontend-builder /app/public/fonts /usr/share/nginx/html/fonts
COPY --chown=node:node --from=frontend-builder /app/dist /app/html
COPY --chown=node:node --from=frontend-builder /app/src/locales /app/html/locales
COPY --chown=node:node --from=frontend-builder /app/public/fonts /app/html/fonts
COPY --chown=node:node --from=production-deps /app/node_modules /app/node_modules
COPY --chown=node:node --from=backend-builder /app/dist/backend ./dist/backend
@@ -82,8 +72,11 @@ COPY --chown=node:node package.json ./
VOLUME ["/app/data"]
EXPOSE ${PORT} 30001 30002 30003 30004 30005
EXPOSE ${PORT} 30001 30002 30003 30004 30005 30006
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
USER node
CMD ["/entrypoint.sh"]


@@ -11,24 +11,21 @@ echo "Configuring web UI to run on port: $PORT"
if [ "$ENABLE_SSL" = "true" ]; then
echo "SSL enabled - using HTTPS configuration with redirect"
NGINX_CONF_SOURCE="/etc/nginx/nginx-https.conf"
NGINX_CONF_SOURCE="/app/nginx/nginx-https.conf.template"
else
echo "SSL disabled - using HTTP-only configuration (default)"
NGINX_CONF_SOURCE="/etc/nginx/nginx.conf"
NGINX_CONF_SOURCE="/app/nginx/nginx.conf.template"
fi
envsubst '${PORT} ${SSL_PORT} ${SSL_CERT_PATH} ${SSL_KEY_PATH}' < $NGINX_CONF_SOURCE > /etc/nginx/nginx.conf.tmp
mv /etc/nginx/nginx.conf.tmp /etc/nginx/nginx.conf
envsubst '${PORT} ${SSL_PORT} ${SSL_CERT_PATH} ${SSL_KEY_PATH}' < $NGINX_CONF_SOURCE > /app/nginx/nginx.conf
mkdir -p /app/data /app/uploads
chown -R node:node /app/data /app/uploads
chmod 755 /app/data /app/uploads
chmod 755 /app/data /app/uploads 2>/dev/null || true
if [ "$ENABLE_SSL" = "true" ]; then
echo "Checking SSL certificate configuration..."
mkdir -p /app/data/ssl
chown -R node:node /app/data/ssl
chmod 755 /app/data/ssl
chmod 755 /app/data/ssl 2>/dev/null || true
DOMAIN=${SSL_DOMAIN:-localhost}
@@ -84,7 +81,6 @@ EOF
chmod 600 /app/data/ssl/termix.key
chmod 644 /app/data/ssl/termix.crt
chown node:node /app/data/ssl/termix.key /app/data/ssl/termix.crt
rm -f /app/data/ssl/openssl.conf
@@ -93,7 +89,7 @@ EOF
fi
echo "Starting nginx..."
nginx
nginx -c /app/nginx/nginx.conf
echo "Starting backend services..."
cd /app
@@ -110,11 +106,7 @@ else
echo "Warning: package.json not found"
fi
if command -v su-exec > /dev/null 2>&1; then
su-exec node node dist/backend/backend/starter.js
else
su -s /bin/sh node -c "node dist/backend/backend/starter.js"
fi
node dist/backend/backend/starter.js
echo "All services started"


@@ -1,11 +1,22 @@
pid /app/nginx/nginx.pid;
error_log /app/nginx/logs/error.log warn;
events {
worker_connections 1024;
}
http {
include mime.types;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /app/nginx/logs/access.log;
client_body_temp_path /app/nginx/client_body;
proxy_temp_path /app/nginx/proxy_temp;
fastcgi_temp_path /app/nginx/fastcgi_temp;
uwsgi_temp_path /app/nginx/uwsgi_temp;
scgi_temp_path /app/nginx/scgi_temp;
sendfile on;
keepalive_timeout 65;
client_header_timeout 300s;
@@ -34,13 +45,20 @@ http {
ssl_certificate_key ${SSL_KEY_PATH};
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
root /app/html;
expires 1y;
add_header Cache-Control "public, immutable";
try_files $uri =404;
}
location / {
root /usr/share/nginx/html;
root /app/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location ~* \.map$ {
@@ -49,6 +67,15 @@ http {
log_not_found off;
}
location ~ ^/users/sessions(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/users(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
@@ -85,6 +112,15 @@ http {
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/rbac(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/credentials(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
@@ -92,27 +128,45 @@ http {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
}
location ~ ^/database(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
location ~ ^/snippets(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/terminal(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/database(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -120,18 +174,18 @@ http {
location ~ ^/db(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -216,18 +270,18 @@ http {
location /ssh/file_manager/ssh/ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30004;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -257,11 +311,69 @@ http {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
location ~ ^/uptime(/.*)?$ {
proxy_pass http://127.0.0.1:30006;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/activity(/.*)?$ {
proxy_pass http://127.0.0.1:30006;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ^~ /docker/console/ {
proxy_pass http://127.0.0.1:30008/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 10s;
proxy_buffering off;
proxy_request_buffering off;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
}
location ~ ^/docker(/.*)?$ {
proxy_pass http://127.0.0.1:30007;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
root /app/html;
}
}
}
}


@@ -1,11 +1,22 @@
pid /app/nginx/nginx.pid;
error_log /app/nginx/logs/error.log warn;
events {
worker_connections 1024;
}
http {
include mime.types;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /app/nginx/logs/access.log;
client_body_temp_path /app/nginx/client_body;
proxy_temp_path /app/nginx/proxy_temp;
fastcgi_temp_path /app/nginx/fastcgi_temp;
uwsgi_temp_path /app/nginx/uwsgi_temp;
scgi_temp_path /app/nginx/scgi_temp;
sendfile on;
keepalive_timeout 65;
client_header_timeout 300s;
@@ -23,13 +34,20 @@ http {
listen ${PORT};
server_name localhost;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
root /app/html;
expires 1y;
add_header Cache-Control "public, immutable";
try_files $uri =404;
}
location / {
root /usr/share/nginx/html;
root /app/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location ~* \.map$ {
@@ -38,6 +56,15 @@ http {
log_not_found off;
}
location ~ ^/users/sessions(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/users(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
@@ -74,6 +101,15 @@ http {
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/rbac(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/credentials(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
@@ -81,27 +117,45 @@ http {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
}
location ~ ^/database(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
location ~ ^/snippets(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/terminal(/.*)?$ {
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/database(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -109,18 +163,18 @@ http {
location ~ ^/db(/.*)?$ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -205,18 +259,18 @@ http {
location /ssh/file_manager/ssh/ {
client_max_body_size 5G;
client_body_timeout 300s;
proxy_pass http://127.0.0.1:30004;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_request_buffering off;
proxy_buffering off;
}
@@ -246,11 +300,69 @@ http {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
location ~ ^/uptime(/.*)?$ {
proxy_pass http://127.0.0.1:30006;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/activity(/.*)?$ {
proxy_pass http://127.0.0.1:30006;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ^~ /docker/console/ {
proxy_pass http://127.0.0.1:30008/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 10s;
proxy_buffering off;
proxy_request_buffering off;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
}
location ~ ^/docker(/.*)?$ {
proxy_pass http://127.0.0.1:30007;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
root /app/html;
}
}
}
}

View File

@@ -1,6 +1,7 @@
{
"appId": "com.termix.app",
"appId": "com.karmaa.termix",
"productName": "Termix",
"publish": null,
"directories": {
"output": "release"
},
@@ -21,35 +22,53 @@
},
"buildDependenciesFromSource": false,
"nodeGypRebuild": false,
"npmRebuild": false,
"npmRebuild": true,
"win": {
"target": "nsis",
"target": [
{
"target": "nsis",
"arch": ["x64", "ia32"]
},
{
"target": "msi",
"arch": ["x64", "ia32"]
}
],
"icon": "public/icon.ico",
"executableName": "Termix"
},
"nsis": {
"oneClick": false,
"allowToChangeInstallationDirectory": true,
"artifactName": "${productName}-Setup-${version}.${ext}",
"artifactName": "termix_windows_${arch}_nsis.${ext}",
"createDesktopShortcut": true,
"createStartMenuShortcut": true,
"shortcutName": "Termix",
"uninstallDisplayName": "Termix"
},
"msi": {
"artifactName": "termix_windows_${arch}_msi.${ext}"
},
"linux": {
"artifactName": "termix_linux_${arch}_portable.${ext}",
"target": [
{
"target": "AppImage",
"arch": ["x64"]
"arch": ["x64", "arm64", "armv7l"]
},
{
"target": "deb",
"arch": ["x64", "arm64", "armv7l"]
},
{
"target": "tar.gz",
"arch": ["x64"]
"arch": ["x64", "arm64", "armv7l"]
}
],
"icon": "public/icon.png",
"category": "Development",
"executableName": "termix",
"maintainer": "Termix <mail@termix.site>",
"desktop": {
"entry": {
"Name": "Termix",
@@ -58,5 +77,53 @@
"StartupWMClass": "termix"
}
}
}
},
"appImage": {
"artifactName": "termix_linux_${arch}_appimage.${ext}"
},
"deb": {
"artifactName": "termix_linux_${arch}_deb.${ext}"
},
"mac": {
"target": [
{
"target": "mas",
"arch": "universal"
},
{
"target": "dmg",
"arch": ["universal", "x64", "arm64"]
}
],
"icon": "public/icon.icns",
"category": "public.app-category.developer-tools",
"hardenedRuntime": true,
"gatekeeperAssess": false,
"entitlements": "build/entitlements.mac.plist",
"entitlementsInherit": "build/entitlements.mac.inherit.plist",
"type": "distribution",
"minimumSystemVersion": "10.15"
},
"dmg": {
"artifactName": "termix_macos_${arch}_dmg.${ext}",
"sign": true
},
"afterSign": "build/notarize.cjs",
"mas": {
"provisioningProfile": "build/Termix_Mac_App_Store.provisionprofile",
"entitlements": "build/entitlements.mas.plist",
"entitlementsInherit": "build/entitlements.mas.inherit.plist",
"hardenedRuntime": false,
"gatekeeperAssess": false,
"asarUnpack": ["**/*.node"],
"type": "distribution",
"category": "public.app-category.developer-tools",
"artifactName": "termix_macos_${arch}_mas.${ext}",
"extendInfo": {
"ITSAppUsesNonExemptEncryption": false,
"NSAppleEventsUsageDescription": "Termix needs access to control other applications for terminal operations."
}
},
"generateUpdatesFilesForAllChannels": true
}

View File

@@ -1,22 +1,30 @@
const { app, BrowserWindow, shell, ipcMain, dialog } = require("electron");
const {
app,
BrowserWindow,
shell,
ipcMain,
dialog,
Menu,
} = require("electron");
const path = require("path");
const fs = require("fs");
const os = require("os");
if (process.platform === "linux") {
app.commandLine.appendSwitch("--ozone-platform-hint=auto");
app.commandLine.appendSwitch("--enable-features=VaapiVideoDecoder");
}
app.commandLine.appendSwitch("--ignore-certificate-errors");
app.commandLine.appendSwitch("--ignore-ssl-errors");
app.commandLine.appendSwitch("--ignore-certificate-errors-spki-list");
app.commandLine.appendSwitch("--enable-features=NetworkService");
if (process.platform === "linux") {
app.commandLine.appendSwitch("--no-sandbox");
app.commandLine.appendSwitch("--disable-setuid-sandbox");
app.commandLine.appendSwitch("--disable-dev-shm-usage");
}
let mainWindow = null;
const isDev = process.env.NODE_ENV === "development" || !app.isPackaged;
const appRoot = isDev ? process.cwd() : path.join(__dirname, "..");
const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
@@ -34,40 +42,131 @@ if (!gotTheLock) {
}
function createWindow() {
const appVersion = app.getVersion();
const electronVersion = process.versions.electron;
const platform =
process.platform === "win32"
? "Windows"
: process.platform === "darwin"
? "macOS"
: "Linux";
mainWindow = new BrowserWindow({
width: 1200,
height: 800,
minWidth: 800,
minHeight: 600,
title: "Termix",
icon: isDev
? path.join(__dirname, "..", "public", "icon.png")
: path.join(process.resourcesPath, "public", "icon.png"),
icon: path.join(appRoot, "public", "icon.png"),
webPreferences: {
nodeIntegration: false,
contextIsolation: true,
webSecurity: true,
webSecurity: false,
preload: path.join(__dirname, "preload.js"),
partition: "persist:termix",
allowRunningInsecureContent: true,
webviewTag: true,
offscreen: false,
},
show: false,
show: true,
});
if (process.platform !== "darwin") {
mainWindow.setMenuBarVisibility(false);
}
const customUserAgent = `Termix-Desktop/${appVersion} (${platform}; Electron/${electronVersion})`;
mainWindow.webContents.setUserAgent(customUserAgent);
mainWindow.webContents.session.webRequest.onBeforeSendHeaders(
(details, callback) => {
details.requestHeaders["X-Electron-App"] = "true";
details.requestHeaders["User-Agent"] = customUserAgent;
callback({ requestHeaders: details.requestHeaders });
},
);
if (isDev) {
mainWindow.loadURL("http://localhost:5173");
mainWindow.webContents.openDevTools();
} else {
const indexPath = path.join(__dirname, "..", "dist", "index.html");
mainWindow.loadFile(indexPath);
const indexPath = path.join(appRoot, "dist", "index.html");
mainWindow.loadFile(indexPath).catch((err) => {
console.error("Failed to load file:", err);
});
}
mainWindow.webContents.session.webRequest.onHeadersReceived(
(details, callback) => {
const headers = details.responseHeaders;
if (headers) {
delete headers["x-frame-options"];
delete headers["X-Frame-Options"];
if (headers["content-security-policy"]) {
headers["content-security-policy"] = headers[
"content-security-policy"
]
.map((value) => value.replace(/frame-ancestors[^;]*/gi, ""))
.filter((value) => value.trim().length > 0);
if (headers["content-security-policy"].length === 0) {
delete headers["content-security-policy"];
}
}
if (headers["Content-Security-Policy"]) {
headers["Content-Security-Policy"] = headers[
"Content-Security-Policy"
]
.map((value) => value.replace(/frame-ancestors[^;]*/gi, ""))
.filter((value) => value.trim().length > 0);
if (headers["Content-Security-Policy"].length === 0) {
delete headers["Content-Security-Policy"];
}
}
if (headers["set-cookie"]) {
headers["set-cookie"] = headers["set-cookie"].map((cookie) => {
let modified = cookie.replace(
/;\s*SameSite=Strict/gi,
"; SameSite=None",
);
modified = modified.replace(
/;\s*SameSite=Lax/gi,
"; SameSite=None",
);
if (!modified.includes("SameSite=")) {
modified += "; SameSite=None";
}
if (
!modified.includes("Secure") &&
details.url.startsWith("https")
) {
modified += "; Secure";
}
return modified;
});
}
}
callback({ responseHeaders: headers });
},
);
mainWindow.once("ready-to-show", () => {
mainWindow.show();
});
setTimeout(() => {
if (mainWindow && !mainWindow.isVisible()) {
mainWindow.show();
}
}, 3000);
mainWindow.webContents.on(
"did-fail-load",
(event, errorCode, errorDescription, validatedURL) => {
@@ -84,13 +183,6 @@ function createWindow() {
console.log("Frontend loaded successfully");
});
mainWindow.on("close", (event) => {
if (process.platform === "darwin") {
event.preventDefault();
mainWindow.hide();
}
});
mainWindow.on("closed", () => {
mainWindow = null;
});
@@ -106,11 +198,11 @@ ipcMain.handle("get-app-version", () => {
});
const GITHUB_API_BASE = "https://api.github.com";
const REPO_OWNER = "LukeGus";
const REPO_OWNER = "Termix-SSH";
const REPO_NAME = "Termix";
const githubCache = new Map();
const CACHE_DURATION = 30 * 60 * 1000; // 30 minutes
const CACHE_DURATION = 30 * 60 * 1000;
async function fetchGitHubAPI(endpoint, cacheKey) {
const cached = githubCache.get(cacheKey);
@@ -299,6 +391,48 @@ ipcMain.handle("save-server-config", (event, config) => {
}
});
ipcMain.handle("get-setting", (event, key) => {
try {
const userDataPath = app.getPath("userData");
const settingsPath = path.join(userDataPath, "settings.json");
if (!fs.existsSync(settingsPath)) {
return null;
}
const settingsData = fs.readFileSync(settingsPath, "utf8");
const settings = JSON.parse(settingsData);
return settings[key] !== undefined ? settings[key] : null;
} catch (error) {
console.error("Error reading setting:", error);
return null;
}
});
ipcMain.handle("set-setting", (event, key, value) => {
try {
const userDataPath = app.getPath("userData");
const settingsPath = path.join(userDataPath, "settings.json");
if (!fs.existsSync(userDataPath)) {
fs.mkdirSync(userDataPath, { recursive: true });
}
let settings = {};
if (fs.existsSync(settingsPath)) {
const settingsData = fs.readFileSync(settingsPath, "utf8");
settings = JSON.parse(settingsData);
}
settings[key] = value;
fs.writeFileSync(settingsPath, JSON.stringify(settings, null, 2));
return { success: true };
} catch (error) {
console.error("Error saving setting:", error);
return { success: false, error: error.message };
}
});
ipcMain.handle("test-server-connection", async (event, serverUrl) => {
try {
const https = require("https");
@@ -462,21 +596,78 @@ ipcMain.handle("test-server-connection", async (event, serverUrl) => {
}
});
function createMenu() {
if (process.platform === "darwin") {
const template = [
{
label: app.name,
submenu: [
{ role: "about" },
{ type: "separator" },
{ role: "services" },
{ type: "separator" },
{ role: "hide" },
{ role: "hideOthers" },
{ role: "unhide" },
{ type: "separator" },
{ role: "quit" },
],
},
{
label: "Edit",
submenu: [
{ role: "undo" },
{ role: "redo" },
{ type: "separator" },
{ role: "cut" },
{ role: "copy" },
{ role: "paste" },
{ role: "selectAll" },
],
},
{
label: "View",
submenu: [
{ role: "reload" },
{ role: "forceReload" },
{ role: "toggleDevTools" },
{ type: "separator" },
{ role: "resetZoom" },
{ role: "zoomIn" },
{ role: "zoomOut" },
{ type: "separator" },
{ role: "togglefullscreen" },
],
},
{
label: "Window",
submenu: [
{ role: "minimize" },
{ role: "zoom" },
{ type: "separator" },
{ role: "front" },
{ type: "separator" },
{ role: "window" },
],
},
];
const menu = Menu.buildFromTemplate(template);
Menu.setApplicationMenu(menu);
}
}
app.whenReady().then(() => {
createMenu();
createWindow();
});
app.on("window-all-closed", () => {
if (process.platform !== "darwin") {
app.quit();
}
app.quit();
});
app.on("activate", () => {
if (BrowserWindow.getAllWindows().length === 0) {
createWindow();
} else if (mainWindow) {
mainWindow.show();
}
});

View File

@@ -2,26 +2,14 @@ const { contextBridge, ipcRenderer } = require("electron");
contextBridge.exposeInMainWorld("electronAPI", {
getAppVersion: () => ipcRenderer.invoke("get-app-version"),
getPlatform: () => ipcRenderer.invoke("get-platform"),
checkElectronUpdate: () => ipcRenderer.invoke("check-electron-update"),
getServerConfig: () => ipcRenderer.invoke("get-server-config"),
saveServerConfig: (config) =>
ipcRenderer.invoke("save-server-config", config),
testServerConnection: (serverUrl) =>
ipcRenderer.invoke("test-server-connection", serverUrl),
showSaveDialog: (options) => ipcRenderer.invoke("show-save-dialog", options),
showOpenDialog: (options) => ipcRenderer.invoke("show-open-dialog", options),
onUpdateAvailable: (callback) => ipcRenderer.on("update-available", callback),
onUpdateDownloaded: (callback) =>
ipcRenderer.on("update-downloaded", callback),
removeAllListeners: (channel) => ipcRenderer.removeAllListeners(channel),
isElectron: true,
isDev: process.env.NODE_ENV === "development",
getSetting: (key) => ipcRenderer.invoke("get-setting", key),
setSetting: (key, value) => ipcRenderer.invoke("set-setting", key, value),
invoke: (channel, ...args) => ipcRenderer.invoke(channel, ...args),
});

View File

@@ -0,0 +1,11 @@
[Desktop Entry]
Name=Termix
Comment=Web-based server management platform with SSH terminal, tunneling, and file editing
Exec=run.sh %U
Icon=com.karmaa.termix
Terminal=false
Type=Application
Categories=Development;Network;System;
Keywords=ssh;terminal;server;management;tunnel;
StartupWMClass=termix
StartupNotify=true

View File

@@ -0,0 +1,12 @@
[Flatpak Ref]
Name=Termix
Branch=stable
Title=Termix - SSH Server Management Platform
IsRuntime=false
Url=https://github.com/Termix-SSH/Termix/releases/download/VERSION_PLACEHOLDER/termix_linux_flatpak.flatpak
GPGKey=
RuntimeRepo=https://flathub.org/repo/flathub.flatpakrepo
Comment=Web-based server management platform with SSH terminal, tunneling, and file editing
Description=Termix is an open-source, forever-free, self-hosted all-in-one server management platform. It provides SSH terminal access, tunneling capabilities, and remote file management.
Icon=https://raw.githubusercontent.com/Termix-SSH/Termix/main/public/icon.png
Homepage=https://github.com/Termix-SSH/Termix

View File

@@ -0,0 +1,77 @@
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop-application">
<id>com.karmaa.termix</id>
<name>Termix</name>
<summary>Web-based server management platform with SSH terminal, tunneling, and file editing</summary>
<metadata_license>CC0-1.0</metadata_license>
<project_license>Apache-2.0</project_license>
<developer_name>bugattiguy527</developer_name>
<description>
<p>
Termix is an open-source, forever-free, self-hosted all-in-one server management platform.
It provides a web-based solution for managing your servers and infrastructure through a single, intuitive interface.
</p>
<p>Features:</p>
<ul>
<li>SSH terminal access with full terminal emulation</li>
<li>SSH tunneling capabilities for secure port forwarding</li>
<li>Remote file management with editor support</li>
<li>Server monitoring and management tools</li>
<li>Self-hosted solution - keep your data private</li>
<li>Modern, intuitive web interface</li>
</ul>
</description>
<launchable type="desktop-id">com.karmaa.termix.desktop</launchable>
<screenshots>
<screenshot type="default">
<image>https://raw.githubusercontent.com/Termix-SSH/Termix/main/public/screenshots/terminal.png</image>
<caption>SSH Terminal Interface</caption>
</screenshot>
</screenshots>
<url type="homepage">https://github.com/Termix-SSH/Termix</url>
<url type="bugtracker">https://github.com/Termix-SSH/Support/issues</url>
<url type="help">https://docs.termix.site</url>
<url type="vcs-browser">https://github.com/Termix-SSH/Termix</url>
<content_rating type="oars-1.1">
<content_attribute id="social-info">moderate</content_attribute>
</content_rating>
<releases>
<release version="VERSION_PLACEHOLDER" date="DATE_PLACEHOLDER">
<description>
<p>Latest release of Termix</p>
</description>
<url>https://github.com/Termix-SSH/Termix/releases</url>
</release>
</releases>
<categories>
<category>Development</category>
<category>Network</category>
<category>System</category>
</categories>
<keywords>
<keyword>ssh</keyword>
<keyword>terminal</keyword>
<keyword>server</keyword>
<keyword>management</keyword>
<keyword>tunnel</keyword>
<keyword>file-manager</keyword>
</keywords>
<provides>
<binary>termix</binary>
</provides>
<requires>
<internet>always</internet>
</requires>
</component>

View File

@@ -0,0 +1,87 @@
app-id: com.karmaa.termix
runtime: org.freedesktop.Platform
runtime-version: "24.08"
sdk: org.freedesktop.Sdk
base: org.electronjs.Electron2.BaseApp
base-version: "24.08"
command: run.sh
separate-locales: false
finish-args:
- --socket=x11
- --socket=wayland
- --socket=pulseaudio
- --share=network
- --share=ipc
- --device=dri
- --filesystem=home
- --socket=ssh-auth
- --socket=session-bus
- --talk-name=org.freedesktop.secrets
- --env=ELECTRON_TRASH=gio
- --env=XCURSOR_PATH=/run/host/user-share/icons:/run/host/share/icons
- --env=ELECTRON_OZONE_PLATFORM_HINT=auto
modules:
- name: termix
buildsystem: simple
build-commands:
- chmod +x termix.AppImage
- ./termix.AppImage --appimage-extract
- install -Dm755 squashfs-root/termix /app/bin/termix
- cp -r squashfs-root/resources /app/bin/
- cp -r squashfs-root/locales /app/bin/ || true
- cp squashfs-root/*.so /app/bin/ || true
- cp squashfs-root/*.pak /app/bin/ || true
- cp squashfs-root/*.bin /app/bin/ || true
- cp squashfs-root/*.dat /app/bin/ || true
- cp squashfs-root/*.json /app/bin/ || true
- |
cat > run.sh << 'EOF'
#!/bin/bash
export TMPDIR="$XDG_RUNTIME_DIR/app/$FLATPAK_ID"
exec zypak-wrapper /app/bin/termix "$@"
EOF
- chmod +x run.sh
- install -Dm755 run.sh /app/bin/run.sh
- install -Dm644 com.karmaa.termix.desktop /app/share/applications/com.karmaa.termix.desktop
- install -Dm644 com.karmaa.termix.metainfo.xml /app/share/metainfo/com.karmaa.termix.metainfo.xml
- install -Dm644 com.karmaa.termix.svg /app/share/icons/hicolor/scalable/apps/com.karmaa.termix.svg
- install -Dm644 icon-256.png /app/share/icons/hicolor/256x256/apps/com.karmaa.termix.png || true
- install -Dm644 icon-128.png /app/share/icons/hicolor/128x128/apps/com.karmaa.termix.png || true
sources:
- type: file
url: https://github.com/Termix-SSH/Termix/releases/download/release-VERSION_PLACEHOLDER-tag/termix_linux_x64_appimage.AppImage
sha256: CHECKSUM_X64_PLACEHOLDER
dest-filename: termix.AppImage
only-arches:
- x86_64
- type: file
url: https://github.com/Termix-SSH/Termix/releases/download/release-VERSION_PLACEHOLDER-tag/termix_linux_arm64_appimage.AppImage
sha256: CHECKSUM_ARM64_PLACEHOLDER
dest-filename: termix.AppImage
only-arches:
- aarch64
- type: file
path: com.karmaa.termix.desktop
- type: file
path: com.karmaa.termix.metainfo.xml
- type: file
path: com.karmaa.termix.svg
- type: file
path: icon-256.png
- type: file
path: icon-128.png

5 flatpak/flathub.json Normal file
View File

@@ -0,0 +1,5 @@
{
"only-arches": ["x86_64", "aarch64"],
"skip-icons-check": false,
"skip-appstream-check": false
}

View File

@@ -5,6 +5,36 @@
<link rel="icon" type="image/svg+xml" href="/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Termix</title>
<style>
.hide-scrollbar {
scrollbar-width: none;
-ms-overflow-style: none;
}
.hide-scrollbar::-webkit-scrollbar {
display: none;
}
.skinny-scrollbar {
scrollbar-width: thin;
scrollbar-color: #4a4a4a #1e1e21;
}
.skinny-scrollbar::-webkit-scrollbar {
width: 6px;
height: 6px;
}
.skinny-scrollbar::-webkit-scrollbar-track {
background: #1e1e21;
}
.skinny-scrollbar::-webkit-scrollbar-thumb {
background-color: #4a4a4a;
border-radius: 3px;
border: 1px solid #1e1e21;
}
</style>
</head>
<body>
<div id="root"></div>

6932 package-lock.json generated

File diff suppressed because it is too large.

View File

@@ -1,26 +1,30 @@
{
"name": "termix",
"private": true,
"version": "1.7.2",
"version": "1.10.0",
"description": "A web-based server management platform with SSH terminal, tunneling, and file editing capabilities",
"author": "Karmaa",
"main": "electron/main.cjs",
"type": "module",
"scripts": {
"clean": "npx prettier . --write",
"format": "prettier --write .",
"format:check": "prettier --check .",
"lint": "eslint .",
"lint:fix": "eslint --fix .",
"type-check": "tsc --noEmit",
"dev": "vite",
"build": "vite build && tsc -p tsconfig.node.json",
"build:backend": "tsc -p tsconfig.node.json",
"dev:backend": "tsc -p tsconfig.node.json && node ./dist/backend/backend/starter.js",
"preview": "vite preview",
"electron:dev": "concurrently \"npm run dev\" \"wait-on http://localhost:5173 && electron .\"",
"electron:dev": "concurrently \"npm run dev\" \"powershell -c \\\"Start-Sleep -Seconds 5\\\" && electron .\"",
"build:win-portable": "npm run build && electron-builder --win --dir",
"build:win-installer": "npm run build && electron-builder --win --publish=never",
"build:linux-portable": "npm run build && electron-builder --linux --dir",
"build:linux-appimage": "npm run build && electron-builder --linux AppImage",
"build:linux-targz": "npm run build && electron-builder --linux tar.gz",
"test:encryption": "tsc -p tsconfig.node.json && node ./dist/backend/backend/utils/encryption-test.js",
"migrate:encryption": "tsc -p tsconfig.node.json && node ./dist/backend/backend/utils/encryption-migration.js"
"build:mac": "npm run build && electron-builder --mac --universal"
},
"dependencies": {
"@codemirror/autocomplete": "^6.18.7",
@@ -31,6 +35,7 @@
"@hookform/resolvers": "^5.1.1",
"@monaco-editor/react": "^4.7.0",
"@radix-ui/react-accordion": "^1.2.11",
"@radix-ui/react-alert-dialog": "^1.1.15",
"@radix-ui/react-checkbox": "^1.3.2",
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-dropdown-menu": "^2.1.15",
@@ -40,11 +45,12 @@
"@radix-ui/react-scroll-area": "^1.2.9",
"@radix-ui/react-select": "^2.2.5",
"@radix-ui/react-separator": "^1.1.7",
"@radix-ui/react-slot": "^1.2.3",
"@radix-ui/react-slider": "^1.3.6",
"@radix-ui/react-slot": "^1.2.4",
"@radix-ui/react-switch": "^1.2.5",
"@radix-ui/react-tabs": "^1.1.12",
"@radix-ui/react-tooltip": "^1.2.8",
"@tailwindcss/vite": "^4.1.11",
"@tailwindcss/vite": "^4.1.14",
"@types/bcryptjs": "^2.4.6",
"@types/cookie-parser": "^1.4.9",
"@types/jszip": "^3.4.0",
@@ -52,6 +58,7 @@
"@types/qrcode": "^1.5.5",
"@types/speakeasy": "^2.0.10",
"@uiw/codemirror-extensions-langs": "^4.24.1",
"@uiw/codemirror-theme-github": "^4.25.4",
"@uiw/react-codemirror": "^4.24.1",
"@xterm/addon-clipboard": "^0.1.0",
"@xterm/addon-fit": "^0.10.0",
@@ -65,11 +72,13 @@
"chalk": "^4.1.2",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"cmdk": "^1.1.1",
"cookie-parser": "^1.4.7",
"cors": "^2.8.5",
"dotenv": "^17.2.0",
"drizzle-orm": "^0.44.3",
"express": "^5.1.0",
"i18n-auto-translation": "^2.2.3",
"i18next": "^25.4.2",
"i18next-browser-languagedetector": "^8.2.0",
"jose": "^5.2.3",
@@ -95,16 +104,22 @@
"react-simple-keyboard": "^3.8.120",
"react-syntax-highlighter": "^15.6.6",
"react-xtermjs": "^1.0.10",
"recharts": "^3.2.1",
"remark-gfm": "^4.0.1",
"socks": "^2.8.7",
"sonner": "^2.0.7",
"speakeasy": "^2.0.0",
"ssh2": "^1.16.0",
"tailwind-merge": "^3.3.1",
"tailwindcss": "^4.1.14",
"wait-on": "^9.0.1",
"ws": "^8.18.3",
"zod": "^4.0.5"
},
"devDependencies": {
"@commitlint/cli": "^20.1.0",
"@commitlint/config-conventional": "^20.0.0",
"@electron/notarize": "^2.5.0",
"@eslint/js": "^9.34.0",
"@types/better-sqlite3": "^7.6.13",
"@types/cors": "^2.8.19",
@@ -115,7 +130,7 @@
"@types/react-dom": "^19.1.6",
"@types/ssh2": "^1.15.5",
"@types/ws": "^8.18.1",
"@vitejs/plugin-react-swc": "^3.10.2",
"@vitejs/plugin-react": "^4.3.4",
"concurrently": "^9.2.1",
"electron": "^38.0.0",
"electron-builder": "^26.0.12",
@@ -123,9 +138,19 @@
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-react-refresh": "^0.4.20",
"globals": "^16.3.0",
"husky": "^9.1.7",
"lint-staged": "^16.2.3",
"prettier": "3.6.2",
"typescript": "~5.9.2",
"typescript-eslint": "^8.40.0",
"vite": "^7.1.5"
},
"lint-staged": {
"*.{js,jsx,ts,tsx}": [
"prettier --write"
],
"*.{json,css,md}": [
"prettier --write"
]
}
}

BIN public/icon-mac.png Normal file (binary file not shown; After: 15 KiB)

Binary file not shown.

Binary file not shown. (Before: 776 KiB, After: 685 KiB)

BIN repo-images/Image 10.png Normal file (binary file not shown; After: 158 KiB)

Binary file not shown. (Before: 309 KiB, After: 598 KiB)

Binary file not shown. (Before: 418 KiB, After: 402 KiB)

Binary file not shown. (Before: 780 KiB, After: 355 KiB)

Binary file not shown. (Before: 305 KiB, After: 432 KiB)

Binary file not shown. (Before: 360 KiB, After: 307 KiB)

BIN repo-images/Image 8.png Normal file (binary file not shown; After: 227 KiB)

BIN repo-images/Image 9.png Normal file (binary file not shown; After: 153 KiB)

265 src/backend/dashboard.ts Normal file
View File

@@ -0,0 +1,265 @@
import express from "express";
import cors from "cors";
import cookieParser from "cookie-parser";
import { getDb } from "./database/db/index.js";
import { recentActivity, sshData, hostAccess } from "./database/db/schema.js";
import { eq, and, desc, or } from "drizzle-orm";
import { dashboardLogger } from "./utils/logger.js";
import { SimpleDBOps } from "./utils/simple-db-ops.js";
import { AuthManager } from "./utils/auth-manager.js";
import type { AuthenticatedRequest } from "../types/index.js";
const app = express();
const authManager = AuthManager.getInstance();
const serverStartTime = Date.now();
const activityRateLimiter = new Map<string, number>();
const RATE_LIMIT_MS = 1000;
app.use(
cors({
origin: (origin, callback) => {
if (!origin) return callback(null, true);
const allowedOrigins = [
"http://localhost:5173",
"http://localhost:3000",
"http://127.0.0.1:5173",
"http://127.0.0.1:3000",
];
if (allowedOrigins.includes(origin)) {
return callback(null, true);
}
if (origin.startsWith("https://")) {
return callback(null, true);
}
if (origin.startsWith("http://")) {
return callback(null, true);
}
callback(new Error("Not allowed by CORS"));
},
credentials: true,
methods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
allowedHeaders: [
"Content-Type",
"Authorization",
"User-Agent",
"X-Electron-App",
],
}),
);
app.use(cookieParser());
app.use(express.json({ limit: "1mb" }));
app.use(authManager.createAuthMiddleware());
app.get("/uptime", async (req, res) => {
try {
const uptimeMs = Date.now() - serverStartTime;
const uptimeSeconds = Math.floor(uptimeMs / 1000);
const days = Math.floor(uptimeSeconds / 86400);
const hours = Math.floor((uptimeSeconds % 86400) / 3600);
const minutes = Math.floor((uptimeSeconds % 3600) / 60);
res.json({
uptimeMs,
uptimeSeconds,
formatted: `${days}d ${hours}h ${minutes}m`,
});
} catch (err) {
dashboardLogger.error("Failed to get uptime", err);
res.status(500).json({ error: "Failed to get uptime" });
}
});
app.get("/activity/recent", async (req, res) => {
try {
const userId = (req as AuthenticatedRequest).userId;
if (!SimpleDBOps.isUserDataUnlocked(userId)) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
const limit = Number(req.query.limit) || 20;
const activities = await SimpleDBOps.select(
getDb()
.select()
.from(recentActivity)
.where(eq(recentActivity.userId, userId))
.orderBy(desc(recentActivity.timestamp))
.limit(limit),
"recent_activity",
userId,
);
res.json(activities);
} catch (err) {
dashboardLogger.error("Failed to get recent activity", err);
res.status(500).json({ error: "Failed to get recent activity" });
}
});
app.post("/activity/log", async (req, res) => {
try {
const userId = (req as AuthenticatedRequest).userId;
if (!SimpleDBOps.isUserDataUnlocked(userId)) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
const { type, hostId, hostName } = req.body;
if (!type || !hostId || !hostName) {
return res.status(400).json({
error: "Missing required fields: type, hostId, hostName",
});
}
if (
![
"terminal",
"file_manager",
"server_stats",
"tunnel",
"docker",
].includes(type)
) {
return res.status(400).json({
error:
"Invalid activity type. Must be 'terminal', 'file_manager', 'server_stats', 'tunnel', or 'docker'",
});
}
const rateLimitKey = `${userId}:${hostId}:${type}`;
const now = Date.now();
const lastLogged = activityRateLimiter.get(rateLimitKey);
if (lastLogged && now - lastLogged < RATE_LIMIT_MS) {
return res.json({
message: "Activity already logged recently (rate limited)",
});
}
activityRateLimiter.set(rateLimitKey, now);
if (activityRateLimiter.size > 10000) {
const entriesToDelete: string[] = [];
for (const [key, timestamp] of activityRateLimiter.entries()) {
if (now - timestamp > RATE_LIMIT_MS * 2) {
entriesToDelete.push(key);
}
}
entriesToDelete.forEach((key) => activityRateLimiter.delete(key));
}
const ownedHosts = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (ownedHosts.length === 0) {
const sharedHosts = await getDb()
.select()
.from(hostAccess)
.where(
and(eq(hostAccess.hostId, hostId), eq(hostAccess.userId, userId)),
);
if (sharedHosts.length === 0) {
return res
.status(404)
.json({ error: "Host not found or access denied" });
}
}
const result = (await SimpleDBOps.insert(
recentActivity,
"recent_activity",
{
userId,
type,
hostId,
hostName,
},
userId,
)) as unknown as { id: number };
const allActivities = await SimpleDBOps.select(
getDb()
.select()
.from(recentActivity)
.where(eq(recentActivity.userId, userId))
.orderBy(desc(recentActivity.timestamp)),
"recent_activity",
userId,
);
if (allActivities.length > 100) {
const toDelete = allActivities.slice(100);
for (const activity of toDelete) {
await SimpleDBOps.delete(recentActivity, "recent_activity", userId);
}
}
res.json({ message: "Activity logged", id: result.id });
} catch (err) {
dashboardLogger.error("Failed to log activity", err);
res.status(500).json({ error: "Failed to log activity" });
}
});
app.delete("/activity/reset", async (req, res) => {
try {
const userId = (req as AuthenticatedRequest).userId;
if (!SimpleDBOps.isUserDataUnlocked(userId)) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
await SimpleDBOps.delete(
recentActivity,
"recent_activity",
eq(recentActivity.userId, userId),
);
dashboardLogger.success("Recent activity cleared", {
operation: "reset_recent_activity",
userId,
});
res.json({ message: "Recent activity cleared" });
} catch (err) {
dashboardLogger.error("Failed to reset activity", err);
res.status(500).json({ error: "Failed to reset activity" });
}
});
const PORT = 30006;
app.listen(PORT, async () => {
try {
await authManager.initialize();
} catch (err) {
dashboardLogger.error("Failed to initialize AuthManager", err, {
operation: "auth_init_error",
});
}
});

View File

@@ -6,6 +6,9 @@ import userRoutes from "./routes/users.js";
import sshRoutes from "./routes/ssh.js";
import alertRoutes from "./routes/alerts.js";
import credentialsRoutes from "./routes/credentials.js";
import snippetsRoutes from "./routes/snippets.js";
import terminalRoutes from "./routes/terminal.js";
import rbacRoutes from "./routes/rbac.js";
import cors from "cors";
import fetch from "node-fetch";
import fs from "fs";
@@ -20,6 +23,7 @@ import { DatabaseMigration } from "../utils/database-migration.js";
import { UserDataExport } from "../utils/user-data-export.js";
import { AutoSSLSetup } from "../utils/auto-ssl-setup.js";
import { eq, and } from "drizzle-orm";
import { parseUserAgent } from "../utils/user-agent-parser.js";
import {
users,
sshData,
@@ -31,6 +35,12 @@ import {
sshCredentialUsage,
settings,
} from "./db/schema.js";
import type {
CacheEntry,
GitHubRelease,
GitHubAPIResponse,
AuthenticatedRequest,
} from "../../types/index.js";
import { getDb } from "./db/index.js";
import Database from "better-sqlite3";
@@ -53,6 +63,10 @@ app.use(
"http://127.0.0.1:3000",
];
if (allowedOrigins.includes(origin)) {
return callback(null, true);
}
if (origin.startsWith("https://")) {
return callback(null, true);
}
@@ -61,10 +75,6 @@ app.use(
return callback(null, true);
}
if (allowedOrigins.includes(origin)) {
return callback(null, true);
}
callback(new Error("Not allowed by CORS"));
},
credentials: true,
@@ -74,6 +84,8 @@ app.use(
"Authorization",
"User-Agent",
"X-Electron-App",
"Accept",
"Origin",
],
}),
);
@@ -105,17 +117,11 @@ const upload = multer({
},
});
interface CacheEntry {
data: any;
timestamp: number;
expiresAt: number;
}
class GitHubCache {
private cache: Map<string, CacheEntry> = new Map();
private readonly CACHE_DURATION = 30 * 60 * 1000;
set(key: string, data: any): void {
set<T>(key: string, data: T): void {
const now = Date.now();
this.cache.set(key, {
data,
@@ -124,7 +130,7 @@ class GitHubCache {
});
}
get(key: string): any | null {
get<T>(key: string): T | null {
const entry = this.cache.get(key);
if (!entry) {
return null;
@@ -135,44 +141,26 @@ class GitHubCache {
return null;
}
return entry.data;
return entry.data as T;
}
}
const githubCache = new GitHubCache();
const GITHUB_API_BASE = "https://api.github.com";
const REPO_OWNER = "LukeGus";
const REPO_OWNER = "Termix-SSH";
const REPO_NAME = "Termix";
interface GitHubRelease {
id: number;
tag_name: string;
name: string;
body: string;
published_at: string;
html_url: string;
assets: Array<{
id: number;
name: string;
size: number;
download_count: number;
browser_download_url: string;
}>;
prerelease: boolean;
draft: boolean;
}
async function fetchGitHubAPI(
async function fetchGitHubAPI<T>(
endpoint: string,
cacheKey: string,
): Promise<any> {
const cachedData = githubCache.get(cacheKey);
if (cachedData) {
): Promise<GitHubAPIResponse<T>> {
const cachedEntry = githubCache.get<CacheEntry<T>>(cacheKey);
if (cachedEntry) {
return {
data: cachedData,
data: cachedEntry.data,
cached: true,
cache_age: Date.now() - cachedData.timestamp,
cache_age: Date.now() - cachedEntry.timestamp,
};
}
@@ -191,8 +179,13 @@ async function fetchGitHubAPI(
);
}
const data = await response.json();
githubCache.set(cacheKey, data);
const data = (await response.json()) as T;
const cacheData: CacheEntry<T> = {
data,
timestamp: Date.now(),
expiresAt: Date.now() + 30 * 60 * 1000,
};
githubCache.set(cacheKey, cacheData);
return {
data: data,
@@ -257,7 +250,7 @@ app.get("/version", authenticateJWT, async (req, res) => {
localVersion = foundVersion;
break;
}
} catch (error) {
} catch {
continue;
}
}
@@ -272,7 +265,7 @@ app.get("/version", authenticateJWT, async (req, res) => {
try {
const cacheKey = "latest_release";
const releaseData = await fetchGitHubAPI(
const releaseData = await fetchGitHubAPI<GitHubRelease>(
`/repos/${REPO_OWNER}/${REPO_NAME}/releases/latest`,
cacheKey,
);
@@ -323,12 +316,12 @@ app.get("/releases/rss", authenticateJWT, async (req, res) => {
);
const cacheKey = `releases_rss_${page}_${per_page}`;
const releasesData = await fetchGitHubAPI(
const releasesData = await fetchGitHubAPI<GitHubRelease[]>(
`/repos/${REPO_OWNER}/${REPO_NAME}/releases?page=${page}&per_page=${per_page}`,
cacheKey,
);
const rssItems = releasesData.data.map((release: GitHubRelease) => ({
const rssItems = releasesData.data.map((release) => ({
id: release.id,
title: release.name || release.tag_name,
description: release.body,
@@ -372,7 +365,6 @@ app.get("/releases/rss", authenticateJWT, async (req, res) => {
app.get("/encryption/status", requireAdmin, async (req, res) => {
try {
const authManager = AuthManager.getInstance();
const securityStatus = {
initialized: true,
system: { hasSecret: true, isValid: true },
@@ -417,8 +409,6 @@ app.post("/encryption/initialize", requireAdmin, async (req, res) => {
app.post("/encryption/regenerate", requireAdmin, async (req, res) => {
try {
const authManager = AuthManager.getInstance();
apiLogger.warn("System JWT secret regenerated via API", {
operation: "jwt_regenerate_api",
});
@@ -440,8 +430,6 @@ app.post("/encryption/regenerate", requireAdmin, async (req, res) => {
app.post("/encryption/regenerate-jwt", requireAdmin, async (req, res) => {
try {
const authManager = AuthManager.getInstance();
apiLogger.warn("JWT secret regenerated via API", {
operation: "jwt_secret_regenerate_api",
});
@@ -462,7 +450,7 @@ app.post("/encryption/regenerate-jwt", requireAdmin, async (req, res) => {
app.post("/database/export", authenticateJWT, async (req, res) => {
try {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { password } = req.body;
if (!password) {
@@ -471,8 +459,12 @@ app.post("/database/export", authenticateJWT, async (req, res) => {
code: "PASSWORD_REQUIRED",
});
}
const unlocked = await authManager.authenticateUser(userId, password);
const deviceInfo = parseUserAgent(req);
const unlocked = await authManager.authenticateUser(
userId,
password,
deviceInfo.type,
);
if (!unlocked) {
return res.status(401).json({ error: "Invalid password" });
}
@@ -695,7 +687,7 @@ app.post("/database/export", authenticateJWT, async (req, res) => {
decrypted.authType,
decrypted.password || null,
decrypted.key || null,
decrypted.keyPassword || null,
decrypted.key_password || null,
decrypted.keyType || null,
decrypted.autostartPassword || null,
decrypted.autostartKey || null,
@@ -738,9 +730,9 @@ app.post("/database/export", authenticateJWT, async (req, res) => {
decrypted.username,
decrypted.password || null,
decrypted.key || null,
decrypted.privateKey || null,
decrypted.publicKey || null,
decrypted.keyPassword || null,
decrypted.private_key || null,
decrypted.public_key || null,
decrypted.key_password || null,
decrypted.keyType || null,
decrypted.detectedKeyType || null,
decrypted.usageCount || 0,
@@ -916,19 +908,48 @@ app.post(
return res.status(400).json({ error: "No file uploaded" });
}
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { password } = req.body;
const mainDb = getDb();
const deviceInfo = parseUserAgent(req);
if (!password) {
return res.status(400).json({
error: "Password required for import",
code: "PASSWORD_REQUIRED",
});
const userRecords = await mainDb
.select()
.from(users)
.where(eq(users.id, userId));
if (!userRecords || userRecords.length === 0) {
return res.status(404).json({ error: "User not found" });
}
const unlocked = await authManager.authenticateUser(userId, password);
if (!unlocked) {
return res.status(401).json({ error: "Invalid password" });
const isOidcUser = !!userRecords[0].is_oidc;
if (!isOidcUser) {
if (!password) {
return res.status(400).json({
error: "Password required for import",
code: "PASSWORD_REQUIRED",
});
}
const unlocked = await authManager.authenticateUser(
userId,
password,
deviceInfo.type,
);
if (!unlocked) {
return res.status(401).json({ error: "Invalid password" });
}
} else if (!DataCrypto.getUserDataKey(userId)) {
const oidcUnlocked = await authManager.authenticateOIDCUser(
userId,
deviceInfo.type,
);
if (!oidcUnlocked) {
return res.status(403).json({
error: "Failed to unlock user data with SSO credentials",
});
}
}
apiLogger.info("Importing SQLite data", {
@@ -939,7 +960,16 @@ app.post(
mimetype: req.file.mimetype,
});
const userDataKey = DataCrypto.getUserDataKey(userId);
let userDataKey = DataCrypto.getUserDataKey(userId);
if (!userDataKey && isOidcUser) {
const oidcUnlocked = await authManager.authenticateOIDCUser(
userId,
deviceInfo.type,
);
if (oidcUnlocked) {
userDataKey = DataCrypto.getUserDataKey(userId);
}
}
if (!userDataKey) {
throw new Error("User data not unlocked");
}
@@ -968,7 +998,7 @@ app.post(
try {
importDb = new Database(req.file.path, { readonly: true });
const tables = importDb
importDb
.prepare("SELECT name FROM sqlite_master WHERE type='table'")
.all();
} catch (sqliteError) {
@@ -993,8 +1023,6 @@ app.post(
};
try {
const mainDb = getDb();
try {
const importedHosts = importDb
.prepare("SELECT * FROM ssh_data")
@@ -1059,7 +1087,7 @@ app.post(
);
}
}
} catch (tableError) {
} catch {
apiLogger.info("ssh_data table not found in import file, skipping");
}
@@ -1120,7 +1148,7 @@ app.post(
);
}
}
} catch (tableError) {
} catch {
apiLogger.info(
"ssh_credentials table not found in import file, skipping",
);
@@ -1191,7 +1219,7 @@ app.post(
);
}
}
} catch (tableError) {
} catch {
apiLogger.info(`${table} table not found in import file, skipping`);
}
}
@@ -1229,7 +1257,7 @@ app.post(
);
}
}
} catch (tableError) {
} catch {
apiLogger.info(
"dismissed_alerts table not found in import file, skipping",
);
@@ -1270,7 +1298,7 @@ app.post(
);
}
}
} catch (tableError) {
} catch {
apiLogger.info("settings table not found in import file, skipping");
}
} else {
@@ -1288,7 +1316,7 @@ app.post(
try {
fs.unlinkSync(req.file.path);
} catch (cleanupError) {
} catch {
apiLogger.warn("Failed to clean up uploaded file", {
operation: "file_cleanup_warning",
filePath: req.file.path,
@@ -1314,7 +1342,7 @@ app.post(
if (req.file?.path && fs.existsSync(req.file.path)) {
try {
fs.unlinkSync(req.file.path);
} catch (cleanupError) {
} catch {
apiLogger.warn("Failed to clean up uploaded file after error", {
operation: "file_cleanup_error",
filePath: req.file.path,
@@ -1324,7 +1352,7 @@ app.post(
apiLogger.error("SQLite import failed", error, {
operation: "sqlite_import_api_failed",
userId: (req as any).userId,
userId: (req as AuthenticatedRequest).userId,
});
res.status(500).json({
error: "Failed to import SQLite data",
@@ -1336,12 +1364,8 @@ app.post(
app.post("/database/export/preview", authenticateJWT, async (req, res) => {
try {
const userId = (req as any).userId;
const {
format = "encrypted",
scope = "user_data",
includeCredentials = true,
} = req.body;
const userId = (req as AuthenticatedRequest).userId;
const { scope = "user_data", includeCredentials = true } = req.body;
const exportData = await UserDataExport.exportUserData(userId, {
format: "encrypted",
@@ -1411,13 +1435,16 @@ app.use("/users", userRoutes);
app.use("/ssh", sshRoutes);
app.use("/alerts", alertRoutes);
app.use("/credentials", credentialsRoutes);
app.use("/snippets", snippetsRoutes);
app.use("/terminal", terminalRoutes);
app.use("/rbac", rbacRoutes);
app.use(
(
err: unknown,
req: express.Request,
res: express.Response,
next: express.NextFunction,
_next: express.NextFunction,
) => {
apiLogger.error("Unhandled error in request", err, {
operation: "error_handler",
@@ -1430,7 +1457,6 @@ app.use(
);
const HTTP_PORT = 30001;
const HTTPS_PORT = process.env.SSL_PORT || 8443;
async function initializeSecurity() {
try {
@@ -1443,13 +1469,6 @@ async function initializeSecurity() {
if (!isValid) {
throw new Error("Security system validation failed");
}
const securityStatus = {
initialized: true,
system: { hasSecret: true, isValid: true },
activeSessions: {},
activeSessionCount: 0,
};
} catch (error) {
databaseLogger.error("Failed to initialize security system", error, {
operation: "security_init_error",

View File

@@ -12,10 +12,6 @@ import { DatabaseSaveTrigger } from "../../utils/database-save-trigger.js";
const dataDir = process.env.DATA_DIR || "./db/data";
const dbDir = path.resolve(dataDir);
if (!fs.existsSync(dbDir)) {
databaseLogger.info(`Creating database directory`, {
operation: "db_init",
path: dbDir,
});
fs.mkdirSync(dbDir, { recursive: true });
}
@@ -23,7 +19,7 @@ const enableFileEncryption = process.env.DB_FILE_ENCRYPTION !== "false";
const dbPath = path.join(dataDir, "db.sqlite");
const encryptedDbPath = `${dbPath}.encrypted`;
let actualDbPath = ":memory:";
const actualDbPath = ":memory:";
let memoryDatabase: Database.Database;
let isNewDatabase = false;
let sqlite: Database.Database;
@@ -31,7 +27,7 @@ let sqlite: Database.Database;
async function initializeDatabaseAsync(): Promise<void> {
const systemCrypto = SystemCrypto.getInstance();
const dbKey = await systemCrypto.getDatabaseKey();
await systemCrypto.getDatabaseKey();
if (enableFileEncryption) {
try {
if (DatabaseFileEncryption.isEncryptedDatabaseFile(encryptedDbPath)) {
@@ -39,6 +35,13 @@ async function initializeDatabaseAsync(): Promise<void> {
await DatabaseFileEncryption.decryptDatabaseToBuffer(encryptedDbPath);
memoryDatabase = new Database(decryptedBuffer);
try {
const sessionCount = memoryDatabase
.prepare("SELECT COUNT(*) as count FROM sessions")
.get() as { count: number };
} catch (countError) {
}
} else {
const migration = new DatabaseMigration(dataDir);
const migrationStatus = migration.checkMigrationStatus();
@@ -92,6 +95,26 @@ async function initializeDatabaseAsync(): Promise<void> {
databaseKeyLength: process.env.DATABASE_KEY?.length || 0,
});
try {
const diagnosticInfo =
DatabaseFileEncryption.getDiagnosticInfo(encryptedDbPath);
databaseLogger.error(
"Database encryption diagnostic completed - check logs above for details",
null,
{
operation: "db_encryption_diagnostic_completed",
filesConsistent: diagnosticInfo.validation.filesConsistent,
sizeMismatch: diagnosticInfo.validation.sizeMismatch,
},
);
} catch (diagError) {
databaseLogger.warn("Failed to generate diagnostic information", {
operation: "db_diagnostic_failed",
error:
diagError instanceof Error ? diagError.message : "Unknown error",
});
}
throw new Error(
`Database decryption failed: ${error instanceof Error ? error.message : "Unknown error"}. This prevents data loss.`,
);
@@ -117,6 +140,8 @@ async function initializeCompleteDatabase(): Promise<void> {
sqlite = memoryDatabase;
sqlite.exec("PRAGMA foreign_keys = ON");
db = drizzle(sqlite, { schema });
sqlite.exec(`
@@ -145,6 +170,18 @@ async function initializeCompleteDatabase(): Promise<void> {
value TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
user_id TEXT NOT NULL,
jwt_token TEXT NOT NULL,
device_type TEXT NOT NULL,
device_info TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
expires_at TEXT NOT NULL,
last_active_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
@@ -164,10 +201,24 @@ async function initializeCompleteDatabase(): Promise<void> {
enable_tunnel INTEGER NOT NULL DEFAULT 1,
tunnel_connections TEXT,
enable_file_manager INTEGER NOT NULL DEFAULT 1,
enable_docker INTEGER NOT NULL DEFAULT 0,
default_path TEXT,
autostart_password TEXT,
autostart_key TEXT,
autostart_key_password TEXT,
force_keyboard_interactive TEXT,
stats_config TEXT,
docker_config TEXT,
terminal_config TEXT,
notes TEXT,
use_socks5 INTEGER,
socks5_host TEXT,
socks5_port INTEGER,
socks5_username TEXT,
socks5_password TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_recent (
@@ -177,8 +228,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
last_opened TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_pinned (
@@ -188,8 +239,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
pinned_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS file_manager_shortcuts (
@@ -199,8 +250,8 @@ async function initializeCompleteDatabase(): Promise<void> {
name TEXT NOT NULL,
path TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS dismissed_alerts (
@@ -208,7 +259,7 @@ async function initializeCompleteDatabase(): Promise<void> {
user_id TEXT NOT NULL,
alert_id TEXT NOT NULL,
dismissed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_credentials (
@@ -228,7 +279,7 @@ async function initializeCompleteDatabase(): Promise<void> {
last_used TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_credential_usage (
@@ -237,13 +288,140 @@ async function initializeCompleteDatabase(): Promise<void> {
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
used_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (credential_id) REFERENCES ssh_credentials (id),
FOREIGN KEY (host_id) REFERENCES ssh_data (id),
FOREIGN KEY (user_id) REFERENCES users (id)
FOREIGN KEY (credential_id) REFERENCES ssh_credentials (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS snippets (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
name TEXT NOT NULL,
content TEXT NOT NULL,
description TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS ssh_folders (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
name TEXT NOT NULL,
color TEXT,
icon TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS recent_activity (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
type TEXT NOT NULL,
host_id INTEGER NOT NULL,
host_name TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS command_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
host_id INTEGER NOT NULL,
command TEXT NOT NULL,
executed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS host_access (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT,
role_id INTEGER,
granted_by TEXT NOT NULL,
permission_level TEXT NOT NULL DEFAULT 'use',
expires_at TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_accessed_at TEXT,
access_count INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
description TEXT,
is_system INTEGER NOT NULL DEFAULT 0,
permissions TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS user_roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
role_id INTEGER NOT NULL,
granted_by TEXT,
granted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(user_id, role_id),
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE SET NULL
);
CREATE TABLE IF NOT EXISTS audit_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
username TEXT NOT NULL,
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id TEXT,
resource_name TEXT,
details TEXT,
ip_address TEXT,
user_agent TEXT,
success INTEGER NOT NULL,
error_message TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS session_recordings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
access_id INTEGER,
started_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
ended_at TEXT,
duration INTEGER,
commands TEXT,
dangerous_actions TEXT,
recording_path TEXT,
terminated_by_owner INTEGER DEFAULT 0,
termination_reason TEXT,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (access_id) REFERENCES host_access (id) ON DELETE SET NULL
);
`);
try {
sqlite.prepare("DELETE FROM sessions").run();
} catch (e) {
databaseLogger.warn("Could not clear sessions on startup", {
operation: "db_init_session_cleanup_failed",
error: e,
});
}
migrateSchema();
try {
@@ -263,6 +441,24 @@ async function initializeCompleteDatabase(): Promise<void> {
error: e,
});
}
try {
const row = sqlite
.prepare("SELECT value FROM settings WHERE key = 'allow_password_login'")
.get();
if (!row) {
sqlite
.prepare(
"INSERT INTO settings (key, value) VALUES ('allow_password_login', 'true')",
)
.run();
}
} catch (e) {
databaseLogger.warn("Could not initialize allow_password_login setting", {
operation: "db_init",
error: e,
});
}
}
const addColumnIfNotExists = (
@@ -273,14 +469,14 @@ const addColumnIfNotExists = (
try {
sqlite
.prepare(
`SELECT ${column}
`SELECT "${column}"
FROM ${table} LIMIT 1`,
)
.get();
} catch (e) {
} catch {
try {
sqlite.exec(`ALTER TABLE ${table}
ADD COLUMN ${column} ${definition};`);
ADD COLUMN "${column}" ${definition};`);
} catch (alterError) {
databaseLogger.warn(`Failed to add column ${column} to ${table}`, {
operation: "schema_migration",
@@ -335,6 +531,7 @@ const migrateSchema = () => {
"INTEGER NOT NULL DEFAULT 1",
);
addColumnIfNotExists("ssh_data", "tunnel_connections", "TEXT");
addColumnIfNotExists("ssh_data", "jump_hosts", "TEXT");
addColumnIfNotExists(
"ssh_data",
"enable_file_manager",
@@ -351,25 +548,422 @@ const migrateSchema = () => {
"updated_at",
"TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP",
);
addColumnIfNotExists("ssh_data", "force_keyboard_interactive", "TEXT");
addColumnIfNotExists("ssh_data", "autostart_password", "TEXT");
addColumnIfNotExists("ssh_data", "autostart_key", "TEXT");
addColumnIfNotExists("ssh_data", "autostart_key_password", "TEXT");
addColumnIfNotExists(
"ssh_data",
"credential_id",
"INTEGER REFERENCES ssh_credentials(id)",
"INTEGER REFERENCES ssh_credentials(id) ON DELETE SET NULL",
);
addColumnIfNotExists(
"ssh_data",
"override_credential_username",
"INTEGER",
);
addColumnIfNotExists("ssh_data", "autostart_password", "TEXT");
addColumnIfNotExists("ssh_data", "autostart_key", "TEXT");
addColumnIfNotExists("ssh_data", "autostart_key_password", "TEXT");
addColumnIfNotExists("ssh_data", "stats_config", "TEXT");
addColumnIfNotExists("ssh_data", "terminal_config", "TEXT");
addColumnIfNotExists("ssh_data", "quick_actions", "TEXT");
addColumnIfNotExists(
"ssh_data",
"enable_docker",
"INTEGER NOT NULL DEFAULT 0",
);
addColumnIfNotExists("ssh_data", "docker_config", "TEXT");
addColumnIfNotExists("ssh_data", "notes", "TEXT");
addColumnIfNotExists("ssh_data", "use_socks5", "INTEGER");
addColumnIfNotExists("ssh_data", "socks5_host", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_port", "INTEGER");
addColumnIfNotExists("ssh_data", "socks5_username", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_password", "TEXT");
addColumnIfNotExists("ssh_data", "socks5_proxy_chain", "TEXT");
addColumnIfNotExists("ssh_credentials", "private_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "public_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "detected_key_type", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_password", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_key", "TEXT");
addColumnIfNotExists("ssh_credentials", "system_key_password", "TEXT");
addColumnIfNotExists("file_manager_recent", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("file_manager_pinned", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("file_manager_shortcuts", "host_id", "INTEGER NOT NULL");
addColumnIfNotExists("snippets", "folder", "TEXT");
addColumnIfNotExists("snippets", "order", "INTEGER NOT NULL DEFAULT 0");
try {
sqlite
.prepare("SELECT id FROM snippet_folders LIMIT 1")
.get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS snippet_folders (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
name TEXT NOT NULL,
color TEXT,
icon TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create snippet_folders table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite
.prepare("SELECT id FROM sessions LIMIT 1")
.get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
user_id TEXT NOT NULL,
jwt_token TEXT NOT NULL,
device_type TEXT NOT NULL,
device_info TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
expires_at TEXT NOT NULL,
last_active_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id)
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create sessions table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM host_access LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS host_access (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT,
role_id INTEGER,
granted_by TEXT NOT NULL,
permission_level TEXT NOT NULL DEFAULT 'use',
expires_at TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_accessed_at TEXT,
access_count INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create host_access table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT role_id FROM host_access LIMIT 1").get();
} catch {
try {
sqlite.exec("ALTER TABLE host_access ADD COLUMN role_id INTEGER REFERENCES roles(id) ON DELETE CASCADE");
} catch (alterError) {
databaseLogger.warn("Failed to add role_id column", {
operation: "schema_migration",
error: alterError,
});
}
}
try {
sqlite.prepare("SELECT sudo_password FROM ssh_data LIMIT 1").get();
} catch {
try {
sqlite.exec("ALTER TABLE ssh_data ADD COLUMN sudo_password TEXT");
} catch (alterError) {
databaseLogger.warn("Failed to add sudo_password column", {
operation: "schema_migration",
error: alterError,
});
}
}
try {
sqlite.prepare("SELECT id FROM roles LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
description TEXT,
is_system INTEGER NOT NULL DEFAULT 0,
permissions TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create roles table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM user_roles LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS user_roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
role_id INTEGER NOT NULL,
granted_by TEXT,
granted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(user_id, role_id),
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (role_id) REFERENCES roles (id) ON DELETE CASCADE,
FOREIGN KEY (granted_by) REFERENCES users (id) ON DELETE SET NULL
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create user_roles table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM audit_logs LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS audit_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
username TEXT NOT NULL,
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id TEXT,
resource_name TEXT,
details TEXT,
ip_address TEXT,
user_agent TEXT,
success INTEGER NOT NULL,
error_message TEXT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create audit_logs table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM session_recordings LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS session_recordings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_id INTEGER NOT NULL,
user_id TEXT NOT NULL,
access_id INTEGER,
started_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
ended_at TEXT,
duration INTEGER,
commands TEXT,
dangerous_actions TEXT,
recording_path TEXT,
terminated_by_owner INTEGER DEFAULT 0,
termination_reason TEXT,
FOREIGN KEY (host_id) REFERENCES ssh_data (id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE,
FOREIGN KEY (access_id) REFERENCES host_access (id) ON DELETE SET NULL
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create session_recordings table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
sqlite.prepare("SELECT id FROM shared_credentials LIMIT 1").get();
} catch {
try {
sqlite.exec(`
CREATE TABLE IF NOT EXISTS shared_credentials (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host_access_id INTEGER NOT NULL,
original_credential_id INTEGER NOT NULL,
target_user_id TEXT NOT NULL,
encrypted_username TEXT NOT NULL,
encrypted_auth_type TEXT NOT NULL,
encrypted_password TEXT,
encrypted_key TEXT,
encrypted_key_password TEXT,
encrypted_key_type TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
needs_re_encryption INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (host_access_id) REFERENCES host_access (id) ON DELETE CASCADE,
FOREIGN KEY (original_credential_id) REFERENCES ssh_credentials (id) ON DELETE CASCADE,
FOREIGN KEY (target_user_id) REFERENCES users (id) ON DELETE CASCADE
);
`);
} catch (createError) {
databaseLogger.warn("Failed to create shared_credentials table", {
operation: "schema_migration",
error: createError,
});
}
}
try {
const existingRoles = sqlite.prepare("SELECT name, is_system FROM roles").all() as Array<{ name: string; is_system: number }>;
try {
const validSystemRoles = ['admin', 'user'];
const unwantedRoleNames = ['superAdmin', 'powerUser', 'readonly', 'member'];
let deletedCount = 0;
const deleteByName = sqlite.prepare("DELETE FROM roles WHERE name = ?");
for (const roleName of unwantedRoleNames) {
const result = deleteByName.run(roleName);
if (result.changes > 0) {
deletedCount += result.changes;
}
}
const deleteOldSystemRole = sqlite.prepare("DELETE FROM roles WHERE name = ? AND is_system = 1");
for (const role of existingRoles) {
if (role.is_system === 1 && !validSystemRoles.includes(role.name) && !unwantedRoleNames.includes(role.name)) {
const result = deleteOldSystemRole.run(role.name);
if (result.changes > 0) {
deletedCount += result.changes;
}
}
}
} catch (cleanupError) {
databaseLogger.warn("Failed to clean up old system roles", {
operation: "schema_migration",
error: cleanupError,
});
}
const systemRoles = [
{
name: "admin",
displayName: "rbac.roles.admin",
description: "Administrator with full access",
permissions: null,
},
{
name: "user",
displayName: "rbac.roles.user",
description: "Regular user",
permissions: null,
},
];
for (const role of systemRoles) {
const existingRole = sqlite.prepare("SELECT id FROM roles WHERE name = ?").get(role.name);
if (!existingRole) {
try {
sqlite.prepare(`
INSERT INTO roles (name, display_name, description, is_system, permissions)
VALUES (?, ?, ?, 1, ?)
`).run(role.name, role.displayName, role.description, role.permissions);
} catch (insertError) {
databaseLogger.warn(`Failed to create system role: ${role.name}`, {
operation: "schema_migration",
error: insertError,
});
}
}
}
try {
const adminUsers = sqlite.prepare("SELECT id FROM users WHERE is_admin = 1").all() as { id: string }[];
const normalUsers = sqlite.prepare("SELECT id FROM users WHERE is_admin = 0").all() as { id: string }[];
const adminRole = sqlite.prepare("SELECT id FROM roles WHERE name = 'admin'").get() as { id: number } | undefined;
const userRole = sqlite.prepare("SELECT id FROM roles WHERE name = 'user'").get() as { id: number } | undefined;
if (adminRole) {
const insertUserRole = sqlite.prepare(`
INSERT OR IGNORE INTO user_roles (user_id, role_id, granted_at)
VALUES (?, ?, CURRENT_TIMESTAMP)
`);
for (const admin of adminUsers) {
try {
insertUserRole.run(admin.id, adminRole.id);
} catch (error) {
// Ignore duplicate errors
}
}
}
if (userRole) {
const insertUserRole = sqlite.prepare(`
INSERT OR IGNORE INTO user_roles (user_id, role_id, granted_at)
VALUES (?, ?, CURRENT_TIMESTAMP)
`);
for (const user of normalUsers) {
try {
insertUserRole.run(user.id, userRole.id);
} catch (error) {
// Ignore duplicate errors
}
}
}
} catch (migrationError) {
databaseLogger.warn("Failed to migrate existing users to roles", {
operation: "schema_migration",
error: migrationError,
});
}
} catch (seedError) {
databaseLogger.warn("Failed to seed system roles", {
operation: "schema_migration",
error: seedError,
});
}
databaseLogger.success("Schema migration completed", {
operation: "schema_migration",
});
@@ -385,6 +979,13 @@ async function saveMemoryDatabaseToFile() {
fs.mkdirSync(dataDir, { recursive: true });
}
try {
const sessionCount = memoryDatabase
.prepare("SELECT COUNT(*) as count FROM sessions")
.get() as { count: number };
} catch (countError) {
}
if (enableFileEncryption) {
await DatabaseFileEncryption.encryptDatabaseFromBuffer(
buffer,
@@ -476,21 +1077,25 @@ async function cleanupDatabase() {
for (const file of files) {
try {
fs.unlinkSync(path.join(tempDir, file));
} catch {}
} catch {
}
}
try {
fs.rmdirSync(tempDir);
} catch {}
} catch {
}
}
} catch (error) {}
} catch {
}
}
process.on("exit", () => {
if (sqlite) {
try {
sqlite.close();
} catch {}
} catch {
}
}
});
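
The migration helper above makes column additions idempotent: it probes the column with a throwaway SELECT and only runs ALTER TABLE when that probe throws. A minimal standalone sketch of the same pattern with better-sqlite3, using a hypothetical hosts table rather than anything from this migration:

import Database from "better-sqlite3";

const db = new Database(":memory:");
db.exec("CREATE TABLE hosts (id INTEGER PRIMARY KEY, ip TEXT NOT NULL)");

// Probe the column with a SELECT; if SQLite reports "no such column",
// add it with ALTER TABLE. Re-running the call is a no-op.
function addColumnIfMissing(table: string, column: string, definition: string): void {
  try {
    db.prepare(`SELECT "${column}" FROM ${table} LIMIT 1`).get();
  } catch {
    db.exec(`ALTER TABLE ${table} ADD COLUMN "${column}" ${definition}`);
  }
}

addColumnIfMissing("hosts", "notes", "TEXT");
addColumnIfMissing("hosts", "notes", "TEXT"); // second call finds the column and does nothing

Quoting the column name, as the updated helper does, also keeps reserved words such as "order" from breaking the probe.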

View File

@@ -30,11 +30,28 @@ export const settings = sqliteTable("settings", {
value: text("value").notNull(),
});
export const sessions = sqliteTable("sessions", {
id: text("id").primaryKey(),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
jwtToken: text("jwt_token").notNull(),
deviceType: text("device_type").notNull(),
deviceInfo: text("device_info").notNull(),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
expiresAt: text("expires_at").notNull(),
lastActiveAt: text("last_active_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const sshData = sqliteTable("ssh_data", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
name: text("name"),
ip: text("ip").notNull(),
port: integer("port").notNull(),
@@ -43,17 +60,22 @@ export const sshData = sqliteTable("ssh_data", {
tags: text("tags"),
pin: integer("pin", { mode: "boolean" }).notNull().default(false),
authType: text("auth_type").notNull(),
forceKeyboardInteractive: text("force_keyboard_interactive"),
password: text("password"),
key: text("key", { length: 8192 }),
key_password: text("key_password"),
keyType: text("key_type"),
sudoPassword: text("sudo_password"),
autostartPassword: text("autostart_password"),
autostartKey: text("autostart_key", { length: 8192 }),
autostartKeyPassword: text("autostart_key_password"),
credentialId: integer("credential_id").references(() => sshCredentials.id),
credentialId: integer("credential_id").references(() => sshCredentials.id, { onDelete: "set null" }),
overrideCredentialUsername: integer("override_credential_username", {
mode: "boolean",
}),
enableTerminal: integer("enable_terminal", { mode: "boolean" })
.notNull()
.default(true),
@@ -61,10 +83,26 @@ export const sshData = sqliteTable("ssh_data", {
.notNull()
.default(true),
tunnelConnections: text("tunnel_connections"),
jumpHosts: text("jump_hosts"),
enableFileManager: integer("enable_file_manager", { mode: "boolean" })
.notNull()
.default(true),
enableDocker: integer("enable_docker", { mode: "boolean" })
.notNull()
.default(false),
defaultPath: text("default_path"),
statsConfig: text("stats_config"),
terminalConfig: text("terminal_config"),
quickActions: text("quick_actions"),
notes: text("notes"),
useSocks5: integer("use_socks5", { mode: "boolean" }),
socks5Host: text("socks5_host"),
socks5Port: integer("socks5_port"),
socks5Username: text("socks5_username"),
socks5Password: text("socks5_password"),
socks5ProxyChain: text("socks5_proxy_chain"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
@@ -77,10 +115,10 @@ export const fileManagerRecent = sqliteTable("file_manager_recent", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
lastOpened: text("last_opened")
@@ -92,10 +130,10 @@ export const fileManagerPinned = sqliteTable("file_manager_pinned", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
pinnedAt: text("pinned_at")
@@ -107,10 +145,10 @@ export const fileManagerShortcuts = sqliteTable("file_manager_shortcuts", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
name: text("name").notNull(),
path: text("path").notNull(),
createdAt: text("created_at")
@@ -122,7 +160,7 @@ export const dismissedAlerts = sqliteTable("dismissed_alerts", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
alertId: text("alert_id").notNull(),
dismissedAt: text("dismissed_at")
.notNull()
@@ -133,7 +171,7 @@ export const sshCredentials = sqliteTable("ssh_credentials", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
description: text("description"),
folder: text("folder"),
@@ -147,6 +185,11 @@ export const sshCredentials = sqliteTable("ssh_credentials", {
key_password: text("key_password"),
keyType: text("key_type"),
detectedKeyType: text("detected_key_type"),
systemPassword: text("system_password"),
systemKey: text("system_key", { length: 16384 }),
systemKeyPassword: text("system_key_password"),
usageCount: integer("usage_count").notNull().default(0),
lastUsed: text("last_used"),
createdAt: text("created_at")
@@ -161,14 +204,246 @@ export const sshCredentialUsage = sqliteTable("ssh_credential_usage", {
id: integer("id").primaryKey({ autoIncrement: true }),
credentialId: integer("credential_id")
.notNull()
.references(() => sshCredentials.id),
.references(() => sshCredentials.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id),
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.notNull()
.references(() => users.id),
.references(() => users.id, { onDelete: "cascade" }),
usedAt: text("used_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const snippets = sqliteTable("snippets", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
content: text("content").notNull(),
description: text("description"),
folder: text("folder"),
order: integer("order").notNull().default(0),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const snippetFolders = sqliteTable("snippet_folders", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
color: text("color"),
icon: text("icon"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const sshFolders = sqliteTable("ssh_folders", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
name: text("name").notNull(),
color: text("color"),
icon: text("icon"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const recentActivity = sqliteTable("recent_activity", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
type: text("type").notNull(),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
hostName: text("host_name"),
timestamp: text("timestamp")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const commandHistory = sqliteTable("command_history", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
command: text("command").notNull(),
executedAt: text("executed_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const hostAccess = sqliteTable("host_access", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.references(() => users.id, { onDelete: "cascade" }),
roleId: integer("role_id")
.references(() => roles.id, { onDelete: "cascade" }),
grantedBy: text("granted_by")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
permissionLevel: text("permission_level")
.notNull()
.default("view"),
expiresAt: text("expires_at"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
lastAccessedAt: text("last_accessed_at"),
accessCount: integer("access_count").notNull().default(0),
});
export const sharedCredentials = sqliteTable("shared_credentials", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostAccessId: integer("host_access_id")
.notNull()
.references(() => hostAccess.id, { onDelete: "cascade" }),
originalCredentialId: integer("original_credential_id")
.notNull()
.references(() => sshCredentials.id, { onDelete: "cascade" }),
targetUserId: text("target_user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
encryptedUsername: text("encrypted_username").notNull(),
encryptedAuthType: text("encrypted_auth_type").notNull(),
encryptedPassword: text("encrypted_password"),
encryptedKey: text("encrypted_key", { length: 16384 }),
encryptedKeyPassword: text("encrypted_key_password"),
encryptedKeyType: text("encrypted_key_type"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
needsReEncryption: integer("needs_re_encryption", { mode: "boolean" })
.notNull()
.default(false),
});
export const roles = sqliteTable("roles", {
id: integer("id").primaryKey({ autoIncrement: true }),
name: text("name").notNull().unique(),
displayName: text("display_name").notNull(),
description: text("description"),
isSystem: integer("is_system", { mode: "boolean" })
.notNull()
.default(false),
permissions: text("permissions"),
createdAt: text("created_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
updatedAt: text("updated_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const userRoles = sqliteTable("user_roles", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
roleId: integer("role_id")
.notNull()
.references(() => roles.id, { onDelete: "cascade" }),
grantedBy: text("granted_by").references(() => users.id, {
onDelete: "set null",
}),
grantedAt: text("granted_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const auditLogs = sqliteTable("audit_logs", {
id: integer("id").primaryKey({ autoIncrement: true }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
username: text("username").notNull(),
action: text("action").notNull(),
resourceType: text("resource_type").notNull(),
resourceId: text("resource_id"),
resourceName: text("resource_name"),
details: text("details"),
ipAddress: text("ip_address"),
userAgent: text("user_agent"),
success: integer("success", { mode: "boolean" }).notNull(),
errorMessage: text("error_message"),
timestamp: text("timestamp")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
});
export const sessionRecordings = sqliteTable("session_recordings", {
id: integer("id").primaryKey({ autoIncrement: true }),
hostId: integer("host_id")
.notNull()
.references(() => sshData.id, { onDelete: "cascade" }),
userId: text("user_id")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
accessId: integer("access_id").references(() => hostAccess.id, {
onDelete: "set null",
}),
startedAt: text("started_at")
.notNull()
.default(sql`CURRENT_TIMESTAMP`),
endedAt: text("ended_at"),
duration: integer("duration"),
commands: text("commands"),
dangerousActions: text("dangerous_actions"),
recordingPath: text("recording_path"),
terminatedByOwner: integer("terminated_by_owner", { mode: "boolean" })
.default(false),
terminationReason: text("termination_reason"),
});
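
Most of the schema changes in this file replace bare .references() calls with ones that pass an onDelete behaviour, so child rows are cleaned up when the parent disappears. A minimal sketch of the same Drizzle pattern in isolation, with a hypothetical notes table standing in for the real ones:

import { sql } from "drizzle-orm";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const users = sqliteTable("users", {
  id: text("id").primaryKey(),
  username: text("username").notNull(),
});

// Deleting a user removes their notes automatically via ON DELETE CASCADE.
export const notes = sqliteTable("notes", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  userId: text("user_id")
    .notNull()
    .references(() => users.id, { onDelete: "cascade" }),
  body: text("body").notNull(),
  createdAt: text("created_at")
    .notNull()
    .default(sql`CURRENT_TIMESTAMP`),
});

Note that SQLite only enforces these clauses when foreign keys are enabled for the connection (PRAGMA foreign_keys = ON).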

View File

@@ -1,3 +1,8 @@
import type {
AuthenticatedRequest,
CacheEntry,
TermixAlert,
} from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { dismissedAlerts } from "../db/schema.js";
@@ -6,17 +11,11 @@ import fetch from "node-fetch";
import { authLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
interface CacheEntry {
data: any;
timestamp: number;
expiresAt: number;
}
class AlertCache {
private cache: Map<string, CacheEntry> = new Map();
private readonly CACHE_DURATION = 5 * 60 * 1000;
set(key: string, data: any): void {
set<T>(key: string, data: T): void {
const now = Date.now();
this.cache.set(key, {
data,
@@ -25,7 +24,7 @@ class AlertCache {
});
}
get(key: string): any | null {
get<T>(key: string): T | null {
const entry = this.cache.get(key);
if (!entry) {
return null;
@@ -36,31 +35,20 @@ class AlertCache {
return null;
}
return entry.data;
return entry.data as T;
}
}
const alertCache = new AlertCache();
const GITHUB_RAW_BASE = "https://raw.githubusercontent.com";
const REPO_OWNER = "LukeGus";
const REPO_NAME = "Termix-Docs";
const REPO_OWNER = "Termix-SSH";
const REPO_NAME = "Docs";
const ALERTS_FILE = "main/termix-alerts.json";
interface TermixAlert {
id: string;
title: string;
message: string;
expiresAt: string;
priority?: "low" | "medium" | "high" | "critical";
type?: "info" | "warning" | "error" | "success";
actionUrl?: string;
actionText?: string;
}
async function fetchAlertsFromGitHub(): Promise<TermixAlert[]> {
const cacheKey = "termix_alerts";
const cachedData = alertCache.get(cacheKey);
const cachedData = alertCache.get<TermixAlert[]>(cacheKey);
if (cachedData) {
return cachedData;
}
@@ -115,7 +103,7 @@ const authenticateJWT = authManager.createAuthMiddleware();
// GET /alerts
router.get("/", authenticateJWT, async (req, res) => {
try {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const allAlerts = await fetchAlertsFromGitHub();
@@ -148,7 +136,7 @@ router.get("/", authenticateJWT, async (req, res) => {
router.post("/dismiss", authenticateJWT, async (req, res) => {
try {
const { alertId } = req.body;
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!alertId) {
authLogger.warn("Missing alertId in dismiss request", { userId });
@@ -170,7 +158,7 @@ router.post("/dismiss", authenticateJWT, async (req, res) => {
return res.status(409).json({ error: "Alert already dismissed" });
}
const result = await db.insert(dismissedAlerts).values({
await db.insert(dismissedAlerts).values({
userId,
alertId,
});
@@ -186,7 +174,7 @@ router.post("/dismiss", authenticateJWT, async (req, res) => {
// GET /alerts/dismissed/:userId
router.get("/dismissed", authenticateJWT, async (req, res) => {
try {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const dismissedAlertRecords = await db
.select({
@@ -211,7 +199,7 @@ router.get("/dismissed", authenticateJWT, async (req, res) => {
router.delete("/dismiss", authenticateJWT, async (req, res) => {
try {
const { alertId } = req.body;
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!alertId) {
return res.status(400).json({ error: "Alert ID is required" });
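
Earlier in this file the alert cache drops its any types in favour of generics, with CacheEntry moved into the shared types module. A self-contained sketch of that typed TTL cache, assuming the same five-minute expiry (the exact shared CacheEntry shape may differ):

interface CacheEntry<T> {
  data: T;
  timestamp: number;
  expiresAt: number;
}

class TypedCache {
  private cache = new Map<string, CacheEntry<unknown>>();
  private readonly ttlMs = 5 * 60 * 1000; // five minutes, as in AlertCache

  set<T>(key: string, data: T): void {
    const now = Date.now();
    this.cache.set(key, { data, timestamp: now, expiresAt: now + this.ttlMs });
  }

  get<T>(key: string): T | null {
    const entry = this.cache.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      this.cache.delete(key);
      return null;
    }
    return entry.data as T;
  }
}

// Callers state the cached shape once per call site instead of receiving any:
const cache = new TypedCache();
cache.set<string[]>("termix_alerts", ["alert-1"]);
const alerts: string[] | null = cache.get<string[]>("termix_alerts");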

View File

@@ -1,16 +1,23 @@
import type {
AuthenticatedRequest,
CredentialBackend,
} from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { sshCredentials, sshCredentialUsage, sshData } from "../db/schema.js";
import {
sshCredentials,
sshCredentialUsage,
sshData,
hostAccess,
} from "../db/schema.js";
import { eq, and, desc, sql } from "drizzle-orm";
import type { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { SimpleDBOps } from "../../utils/simple-db-ops.js";
import { AuthManager } from "../../utils/auth-manager.js";
import {
parseSSHKey,
parsePublicKey,
detectKeyType,
validateKeyPair,
} from "../../utils/ssh-key-utils.js";
import crypto from "crypto";
@@ -29,7 +36,11 @@ function generateSSHKeyPair(
} {
try {
let ssh2Type = keyType;
const options: any = {};
const options: {
bits?: number;
passphrase?: string;
cipher?: string;
} = {};
if (keyType === "ssh-rsa") {
ssh2Type = "rsa";
@@ -46,6 +57,7 @@ function generateSSHKeyPair(
options.cipher = "aes128-cbc";
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const keyPair = ssh2Utils.generateKeyPairSync(ssh2Type as any, options);
return {
@@ -64,7 +76,7 @@ function generateSSHKeyPair(
const router = express.Router();
function isNonEmptyString(val: any): val is string {
function isNonEmptyString(val: unknown): val is string {
return typeof val === "string" && val.trim().length > 0;
}
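
isNonEmptyString is retyped here from any to unknown, so callers get a proper type guard: inside the check, TypeScript narrows the value to string. A short sketch of how that behaves (describe and its label output are made up for illustration):

function isNonEmptyString(val: unknown): val is string {
  return typeof val === "string" && val.trim().length > 0;
}

// After the guard passes, `val` is a string and string methods are safe to call.
function describe(val: unknown): string {
  if (isNonEmptyString(val)) {
    return `label: ${val.toUpperCase()}`;
  }
  return "label: (missing)";
}

console.log(describe("admin")); // label: ADMIN
console.log(describe(42));      // label: (missing)
console.log(describe("   "));   // label: (missing), whitespace-only fails the guard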
@@ -79,7 +91,7 @@ router.post(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const {
name,
description,
@@ -226,7 +238,7 @@ router.get(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for credential fetch");
@@ -259,7 +271,7 @@ router.get(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for credential folder fetch");
@@ -297,7 +309,7 @@ router.get(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
if (!isNonEmptyString(userId) || !id) {
@@ -328,19 +340,19 @@ router.get(
const output = formatCredentialOutput(credential);
if (credential.password) {
(output as any).password = credential.password;
output.password = credential.password;
}
if (credential.key) {
(output as any).key = credential.key;
output.key = credential.key;
}
if (credential.private_key) {
(output as any).privateKey = credential.private_key;
output.privateKey = credential.private_key;
}
if (credential.public_key) {
(output as any).publicKey = credential.public_key;
output.publicKey = credential.public_key;
}
if (credential.key_password) {
(output as any).keyPassword = credential.key_password;
output.keyPassword = credential.key_password;
}
res.json(output);
@@ -361,7 +373,7 @@ router.put(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
const updateData = req.body;
@@ -385,7 +397,7 @@ router.put(
return res.status(404).json({ error: "Credential not found" });
}
const updateFields: any = {};
const updateFields: Record<string, string | null | undefined> = {};
if (updateData.name !== undefined)
updateFields.name = updateData.name.trim();
@@ -466,6 +478,14 @@ router.put(
userId,
);
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
await sharedCredManager.updateSharedCredentialsForOriginal(
parseInt(id),
userId,
);
const credential = updated[0];
authLogger.success(
`SSH credential updated: ${credential.name} (${credential.authType}) by user ${userId}`,
@@ -497,7 +517,7 @@ router.delete(
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
if (!isNonEmptyString(userId) || !id) {
@@ -546,16 +566,32 @@ router.delete(
eq(sshData.userId, userId),
),
);
for (const host of hostsUsingCredential) {
const revokedShares = await db
.delete(hostAccess)
.where(eq(hostAccess.hostId, host.id))
.returning({ id: hostAccess.id });
if (revokedShares.length > 0) {
authLogger.info(
"Auto-revoked host shares due to credential deletion",
{
operation: "auto_revoke_shares",
hostId: host.id,
credentialId: parseInt(id),
revokedCount: revokedShares.length,
reason: "credential_deleted",
},
);
}
}
}
await db
.delete(sshCredentialUsage)
.where(
and(
eq(sshCredentialUsage.credentialId, parseInt(id)),
eq(sshCredentialUsage.userId, userId),
),
);
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
await sharedCredManager.deleteSharedCredentialsForOriginal(parseInt(id));
await db
.delete(sshCredentials)
@@ -596,7 +632,7 @@ router.post(
"/:id/apply-to-host/:hostId",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { id: credentialId, hostId } = req.params;
if (!isNonEmptyString(userId) || !credentialId || !hostId) {
@@ -629,8 +665,8 @@ router.post(
.update(sshData)
.set({
credentialId: parseInt(credentialId),
username: credential.username,
authType: credential.auth_type || credential.authType,
username: credential.username as string,
authType: (credential.auth_type || credential.authType) as string,
password: null,
key: null,
key_password: null,
@@ -675,7 +711,7 @@ router.get(
"/:id/hosts",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { id: credentialId } = req.params;
if (!isNonEmptyString(userId) || !credentialId) {
@@ -707,7 +743,9 @@ router.get(
},
);
function formatCredentialOutput(credential: any): any {
function formatCredentialOutput(
credential: Record<string, unknown>,
): Record<string, unknown> {
return {
id: credential.id,
name: credential.name,
@@ -731,7 +769,9 @@ function formatCredentialOutput(credential: any): any {
};
}
function formatSSHHostOutput(host: any): any {
function formatSSHHostOutput(
host: Record<string, unknown>,
): Record<string, unknown> {
return {
id: host.id,
userId: host.userId,
@@ -751,7 +791,7 @@ function formatSSHHostOutput(host: any): any {
enableTerminal: !!host.enableTerminal,
enableTunnel: !!host.enableTunnel,
tunnelConnections: host.tunnelConnections
? JSON.parse(host.tunnelConnections)
? JSON.parse(host.tunnelConnections as string)
: [],
enableFileManager: !!host.enableFileManager,
defaultPath: host.defaultPath,
@@ -766,7 +806,7 @@ router.put(
"/folders/rename",
authenticateJWT,
async (req: Request, res: Response) => {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
const { oldName, newName } = req.body;
if (!isNonEmptyString(oldName) || !isNonEmptyString(newName)) {
@@ -970,7 +1010,7 @@ router.post(
try {
let privateKeyObj;
let parseAttempts = [];
const parseAttempts = [];
try {
privateKeyObj = crypto.createPrivateKey({
@@ -1093,7 +1133,9 @@ router.post(
finalPublicKey = `${keyType} ${base64Data}`;
formatType = "ssh";
}
} catch (sshError) {}
} catch {
// Ignore validation errors
}
const response = {
success: true,
@@ -1117,15 +1159,14 @@ router.post(
);
async function deploySSHKeyToHost(
hostConfig: any,
publicKey: string,
credentialData: any,
hostConfig: Record<string, unknown>,
credData: CredentialBackend,
): Promise<{ success: boolean; message?: string; error?: string }> {
const publicKey = credData.public_key as string;
return new Promise((resolve) => {
const conn = new Client();
let connectionTimeout: NodeJS.Timeout;
connectionTimeout = setTimeout(() => {
const connectionTimeout = setTimeout(() => {
conn.destroy();
resolve({ success: false, error: "Connection timeout" });
}, 120000);
@@ -1158,7 +1199,9 @@ async function deploySSHKeyToHost(
}
});
stream.on("data", (data) => {});
stream.on("data", () => {
// Ignore output
});
},
);
});
@@ -1175,7 +1218,9 @@ async function deploySSHKeyToHost(
if (parsed.data) {
actualPublicKey = parsed.data;
}
} catch (e) {}
} catch {
// Ignore parse errors
}
const keyParts = actualPublicKey.trim().split(" ");
if (keyParts.length < 2) {
@@ -1202,7 +1247,7 @@ async function deploySSHKeyToHost(
output += data.toString();
});
stream.on("close", (code) => {
stream.on("close", () => {
clearTimeout(checkTimeout);
const exists = output.trim() === "0";
resolveCheck(exists);
@@ -1229,20 +1274,26 @@ async function deploySSHKeyToHost(
if (parsed.data) {
actualPublicKey = parsed.data;
}
} catch (e) {}
} catch {
// Ignore parse errors
}
const escapedKey = actualPublicKey
.replace(/\\/g, "\\\\")
.replace(/'/g, "'\\''");
conn.exec(
`printf '%s\\n' '${escapedKey}' >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys`,
`printf '%s\\n' '${escapedKey} ${credData.name}@Termix' >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys`,
(err, stream) => {
if (err) {
clearTimeout(addTimeout);
return rejectAdd(err);
}
stream.on("data", () => {
// Consume output
});
stream.on("close", (code) => {
clearTimeout(addTimeout);
if (code === 0) {
@@ -1269,7 +1320,9 @@ async function deploySSHKeyToHost(
if (parsed.data) {
actualPublicKey = parsed.data;
}
} catch (e) {}
} catch {
// Ignore parse errors
}
const keyParts = actualPublicKey.trim().split(" ");
if (keyParts.length < 2) {
@@ -1295,7 +1348,7 @@ async function deploySSHKeyToHost(
output += data.toString();
});
stream.on("close", (code) => {
stream.on("close", () => {
clearTimeout(verifyTimeout);
const verified = output.trim() === "0";
resolveVerify(verified);
@@ -1356,7 +1409,7 @@ async function deploySSHKeyToHost(
});
try {
const connectionConfig: any = {
const connectionConfig: Record<string, unknown> = {
host: hostConfig.ip,
port: hostConfig.port || 22,
username: hostConfig.username,
@@ -1403,14 +1456,15 @@ async function deploySSHKeyToHost(
connectionConfig.password = hostConfig.password;
} else if (hostConfig.authType === "key" && hostConfig.privateKey) {
try {
const privateKey = hostConfig.privateKey as string;
if (
!hostConfig.privateKey.includes("-----BEGIN") ||
!hostConfig.privateKey.includes("-----END")
!privateKey.includes("-----BEGIN") ||
!privateKey.includes("-----END")
) {
throw new Error("Invalid private key format");
}
const cleanKey = hostConfig.privateKey
const cleanKey = privateKey
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
@@ -1465,7 +1519,7 @@ router.post(
}
try {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!userId) {
return res.status(401).json({
success: false,
@@ -1491,7 +1545,7 @@ router.post(
});
}
const credData = credential[0];
const credData = credential[0] as unknown as CredentialBackend;
if (credData.authType !== "key") {
return res.status(400).json({
@@ -1500,7 +1554,8 @@ router.post(
});
}
if (!credData.publicKey) {
const publicKey = credData.public_key;
if (!publicKey) {
return res.status(400).json({
success: false,
error: "Public key is required for deployment",
@@ -1521,7 +1576,7 @@ router.post(
const hostData = targetHost[0];
let hostConfig = {
const hostConfig = {
ip: hostData.ip,
port: hostData.port,
username: hostData.username,
@@ -1532,7 +1587,7 @@ router.post(
};
if (hostData.authType === "credential" && hostData.credentialId) {
const userId = (req as any).userId;
const userId = (req as AuthenticatedRequest).userId;
if (!userId) {
return res.status(400).json({
success: false,
@@ -1546,7 +1601,7 @@ router.post(
db
.select()
.from(sshCredentials)
.where(eq(sshCredentials.id, hostData.credentialId))
.where(eq(sshCredentials.id, hostData.credentialId as number))
.limit(1),
"ssh_credentials",
userId,
@@ -1571,7 +1626,7 @@ router.post(
error: "Host credential not found",
});
}
} catch (error) {
} catch {
return res.status(500).json({
success: false,
error: "Failed to resolve host credentials",
@@ -1579,11 +1634,7 @@ router.post(
}
}
const deployResult = await deploySSHKeyToHost(
hostConfig,
credData.publicKey,
credData,
);
const deployResult = await deploySSHKeyToHost(hostConfig, credData);
if (deployResult.success) {
res.json({

View File

@@ -0,0 +1,850 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import {
hostAccess,
sshData,
users,
roles,
userRoles,
auditLogs,
sharedCredentials,
} from "../db/schema.js";
import { eq, and, desc, sql, or, isNull, gte } from "drizzle-orm";
import type { Request, Response } from "express";
import { databaseLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
import { PermissionManager } from "../../utils/permission-manager.js";
const router = express.Router();
const authManager = AuthManager.getInstance();
const permissionManager = PermissionManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
function isNonEmptyString(value: unknown): value is string {
return typeof value === "string" && value.trim().length > 0;
}
// Share a host with a user or role
// POST /rbac/host/:id/share
router.post(
"/host/:id/share",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const userId = req.userId!;
if (isNaN(hostId)) {
return res.status(400).json({ error: "Invalid host ID" });
}
try {
const {
targetType = "user",
targetUserId,
targetRoleId,
durationHours,
permissionLevel = "view",
} = req.body;
if (!["user", "role"].includes(targetType)) {
return res
.status(400)
.json({ error: "Invalid target type. Must be 'user' or 'role'" });
}
if (targetType === "user" && !isNonEmptyString(targetUserId)) {
return res
.status(400)
.json({ error: "Target user ID is required when sharing with user" });
}
if (targetType === "role" && !targetRoleId) {
return res
.status(400)
.json({ error: "Target role ID is required when sharing with role" });
}
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
databaseLogger.warn("Attempt to share host not owned by user", {
operation: "share_host",
userId,
hostId,
});
return res.status(403).json({ error: "Not host owner" });
}
if (!host[0].credentialId) {
return res.status(400).json({
error:
"Only hosts using credentials can be shared. Please create a credential and assign it to this host before sharing.",
code: "CREDENTIAL_REQUIRED_FOR_SHARING",
});
}
if (targetType === "user") {
const targetUser = await db
.select({ id: users.id, username: users.username })
.from(users)
.where(eq(users.id, targetUserId))
.limit(1);
if (targetUser.length === 0) {
return res.status(404).json({ error: "Target user not found" });
}
} else {
const targetRole = await db
.select({ id: roles.id, name: roles.name })
.from(roles)
.where(eq(roles.id, targetRoleId))
.limit(1);
if (targetRole.length === 0) {
return res.status(404).json({ error: "Target role not found" });
}
}
let expiresAt: string | null = null;
if (
durationHours &&
typeof durationHours === "number" &&
durationHours > 0
) {
const expiryDate = new Date();
expiryDate.setHours(expiryDate.getHours() + durationHours);
expiresAt = expiryDate.toISOString();
}
const validLevels = ["view"];
if (!validLevels.includes(permissionLevel)) {
return res.status(400).json({
error: "Invalid permission level. Only 'view' is supported.",
validLevels,
});
}
const whereConditions = [eq(hostAccess.hostId, hostId)];
if (targetType === "user") {
whereConditions.push(eq(hostAccess.userId, targetUserId));
} else {
whereConditions.push(eq(hostAccess.roleId, targetRoleId));
}
const existing = await db
.select()
.from(hostAccess)
.where(and(...whereConditions))
.limit(1);
if (existing.length > 0) {
await db
.update(hostAccess)
.set({
permissionLevel,
expiresAt,
})
.where(eq(hostAccess.id, existing[0].id));
await db
.delete(sharedCredentials)
.where(eq(sharedCredentials.hostAccessId, existing[0].id));
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
if (targetType === "user") {
await sharedCredManager.createSharedCredentialForUser(
existing[0].id,
host[0].credentialId,
targetUserId!,
userId,
);
} else {
await sharedCredManager.createSharedCredentialsForRole(
existing[0].id,
host[0].credentialId,
targetRoleId!,
userId,
);
}
return res.json({
success: true,
message: "Host access updated",
expiresAt,
});
}
const result = await db.insert(hostAccess).values({
hostId,
userId: targetType === "user" ? targetUserId : null,
roleId: targetType === "role" ? targetRoleId : null,
grantedBy: userId,
permissionLevel,
expiresAt,
});
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
if (targetType === "user") {
await sharedCredManager.createSharedCredentialForUser(
result.lastInsertRowid as number,
host[0].credentialId,
targetUserId!,
userId,
);
} else {
await sharedCredManager.createSharedCredentialsForRole(
result.lastInsertRowid as number,
host[0].credentialId,
targetRoleId!,
userId,
);
}
res.json({
success: true,
message: `Host shared successfully with ${targetType}`,
expiresAt,
});
} catch (error) {
databaseLogger.error("Failed to share host", error, {
operation: "share_host",
hostId,
userId,
});
res.status(500).json({ error: "Failed to share host" });
}
},
);
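
The share handler converts an optional durationHours into an absolute expiresAt timestamp, and the shared-hosts query later treats a grant as active when expiresAt is null or still in the future. A small sketch of that expiry logic, with helper names invented for illustration:

// Turn an optional duration in hours into an absolute ISO expiry,
// or null for a grant that never expires.
function computeExpiresAt(durationHours?: number): string | null {
  if (typeof durationHours !== "number" || durationHours <= 0) {
    return null;
  }
  const expiry = new Date();
  expiry.setHours(expiry.getHours() + durationHours);
  return expiry.toISOString();
}

// ISO 8601 UTC strings compare correctly as plain strings, which is why the
// route can filter TEXT columns with gte(hostAccess.expiresAt, now).
function isGrantActive(expiresAt: string | null, now = new Date().toISOString()): boolean {
  return expiresAt === null || expiresAt >= now;
}

console.log(computeExpiresAt(24)); // an ISO timestamp 24 hours from now
console.log(isGrantActive(null)); // true: no expiry means the grant stays active
console.log(isGrantActive("2020-01-01T00:00:00.000Z")); // false: already expired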
// Revoke host access
// DELETE /rbac/host/:id/access/:accessId
router.delete(
"/host/:id/access/:accessId",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const accessId = parseInt(req.params.accessId, 10);
const userId = req.userId!;
if (isNaN(hostId) || isNaN(accessId)) {
return res.status(400).json({ error: "Invalid ID" });
}
try {
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
return res.status(403).json({ error: "Not host owner" });
}
await db.delete(hostAccess).where(eq(hostAccess.id, accessId));
res.json({ success: true, message: "Access revoked" });
} catch (error) {
databaseLogger.error("Failed to revoke host access", error, {
operation: "revoke_host_access",
hostId,
accessId,
userId,
});
res.status(500).json({ error: "Failed to revoke access" });
}
},
);
// Get host access list
// GET /rbac/host/:id/access
router.get(
"/host/:id/access",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const hostId = parseInt(req.params.id, 10);
const userId = req.userId!;
if (isNaN(hostId)) {
return res.status(400).json({ error: "Invalid host ID" });
}
try {
const host = await db
.select()
.from(sshData)
.where(and(eq(sshData.id, hostId), eq(sshData.userId, userId)))
.limit(1);
if (host.length === 0) {
return res.status(403).json({ error: "Not host owner" });
}
const rawAccessList = await db
.select({
id: hostAccess.id,
userId: hostAccess.userId,
roleId: hostAccess.roleId,
username: users.username,
roleName: roles.name,
roleDisplayName: roles.displayName,
grantedBy: hostAccess.grantedBy,
grantedByUsername: sql<string>`(SELECT username FROM users WHERE id = ${hostAccess.grantedBy})`,
permissionLevel: hostAccess.permissionLevel,
expiresAt: hostAccess.expiresAt,
createdAt: hostAccess.createdAt,
})
.from(hostAccess)
.leftJoin(users, eq(hostAccess.userId, users.id))
.leftJoin(roles, eq(hostAccess.roleId, roles.id))
.where(eq(hostAccess.hostId, hostId))
.orderBy(desc(hostAccess.createdAt));
const accessList = rawAccessList.map((access) => ({
id: access.id,
targetType: access.userId ? "user" : "role",
userId: access.userId,
roleId: access.roleId,
username: access.username,
roleName: access.roleName,
roleDisplayName: access.roleDisplayName,
grantedBy: access.grantedBy,
grantedByUsername: access.grantedByUsername,
permissionLevel: access.permissionLevel,
expiresAt: access.expiresAt,
createdAt: access.createdAt,
}));
res.json({ accessList });
} catch (error) {
databaseLogger.error("Failed to get host access list", error, {
operation: "get_host_access_list",
hostId,
userId,
});
res.status(500).json({ error: "Failed to get access list" });
}
},
);
// Get user's shared hosts (hosts shared WITH this user)
// GET /rbac/shared-hosts
router.get(
"/shared-hosts",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const userId = req.userId!;
try {
const now = new Date().toISOString();
const sharedHosts = await db
.select({
id: sshData.id,
name: sshData.name,
ip: sshData.ip,
port: sshData.port,
username: sshData.username,
folder: sshData.folder,
tags: sshData.tags,
permissionLevel: hostAccess.permissionLevel,
expiresAt: hostAccess.expiresAt,
grantedBy: hostAccess.grantedBy,
ownerUsername: users.username,
})
.from(hostAccess)
.innerJoin(sshData, eq(hostAccess.hostId, sshData.id))
.innerJoin(users, eq(sshData.userId, users.id))
.where(
and(
eq(hostAccess.userId, userId),
or(isNull(hostAccess.expiresAt), gte(hostAccess.expiresAt, now)),
),
)
.orderBy(desc(hostAccess.createdAt));
res.json({ sharedHosts });
} catch (error) {
databaseLogger.error("Failed to get shared hosts", error, {
operation: "get_shared_hosts",
userId,
});
res.status(500).json({ error: "Failed to get shared hosts" });
}
},
);
// Get all roles
// GET /rbac/roles
router.get(
"/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
try {
const allRoles = await db
.select()
.from(roles)
.orderBy(roles.isSystem, roles.name);
const rolesWithParsedPermissions = allRoles.map((role) => ({
...role,
permissions: JSON.parse(role.permissions),
}));
res.json({ roles: rolesWithParsedPermissions });
} catch (error) {
databaseLogger.error("Failed to get roles", error, {
operation: "get_roles",
});
res.status(500).json({ error: "Failed to get roles" });
}
},
);
// Get all roles
// GET /rbac/roles
router.get(
"/roles",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
try {
const rolesList = await db
.select({
id: roles.id,
name: roles.name,
displayName: roles.displayName,
description: roles.description,
isSystem: roles.isSystem,
createdAt: roles.createdAt,
updatedAt: roles.updatedAt,
})
.from(roles)
.orderBy(roles.isSystem, roles.name);
res.json({ roles: rolesList });
} catch (error) {
databaseLogger.error("Failed to get roles", error, {
operation: "get_roles",
});
res.status(500).json({ error: "Failed to get roles" });
}
},
);
// Create new role
// POST /rbac/roles
router.post(
"/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const { name, displayName, description } = req.body;
if (!isNonEmptyString(name) || !isNonEmptyString(displayName)) {
return res.status(400).json({
error: "Role name and display name are required",
});
}
if (!/^[a-z0-9_-]+$/.test(name)) {
return res.status(400).json({
error:
"Role name must contain only lowercase letters, numbers, underscores, and hyphens",
});
}
try {
const existing = await db
.select({ id: roles.id })
.from(roles)
.where(eq(roles.name, name))
.limit(1);
if (existing.length > 0) {
return res.status(409).json({
error: "A role with this name already exists",
});
}
const result = await db.insert(roles).values({
name,
displayName,
description: description || null,
isSystem: false,
permissions: null,
});
const newRoleId = result.lastInsertRowid;
res.status(201).json({
success: true,
roleId: newRoleId,
message: "Role created successfully",
});
} catch (error) {
databaseLogger.error("Failed to create role", error, {
operation: "create_role",
roleName: name,
});
res.status(500).json({ error: "Failed to create role" });
}
},
);
// Update role
// PUT /rbac/roles/:id
router.put(
"/roles/:id",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const roleId = parseInt(req.params.id, 10);
const { displayName, description } = req.body;
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
if (!displayName && description === undefined) {
return res.status(400).json({
error: "At least one field (displayName or description) is required",
});
}
try {
const existingRole = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (existingRole.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
const updates: {
displayName?: string;
description?: string | null;
updatedAt: string;
} = {
updatedAt: new Date().toISOString(),
};
if (displayName) {
updates.displayName = displayName;
}
if (description !== undefined) {
updates.description = description || null;
}
await db.update(roles).set(updates).where(eq(roles.id, roleId));
res.json({
success: true,
message: "Role updated successfully",
});
} catch (error) {
databaseLogger.error("Failed to update role", error, {
operation: "update_role",
roleId,
});
res.status(500).json({ error: "Failed to update role" });
}
},
);
// Delete role
// DELETE /rbac/roles/:id
router.delete(
"/roles/:id",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const roleId = parseInt(req.params.id, 10);
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
try {
const role = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error: "Cannot delete system roles",
});
}
const deletedUserRoles = await db
.delete(userRoles)
.where(eq(userRoles.roleId, roleId))
.returning({ userId: userRoles.userId });
for (const { userId } of deletedUserRoles) {
permissionManager.invalidateUserPermissionCache(userId);
}
const deletedHostAccess = await db
.delete(hostAccess)
.where(eq(hostAccess.roleId, roleId))
.returning({ id: hostAccess.id });
await db.delete(roles).where(eq(roles.id, roleId));
res.json({
success: true,
message: "Role deleted successfully",
});
} catch (error) {
databaseLogger.error("Failed to delete role", error, {
operation: "delete_role",
roleId,
});
res.status(500).json({ error: "Failed to delete role" });
}
},
);
// Assign role to user
// POST /rbac/users/:userId/roles
router.post(
"/users/:userId/roles",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const currentUserId = req.userId!;
try {
const { roleId } = req.body;
if (typeof roleId !== "number") {
return res.status(400).json({ error: "Role ID is required" });
}
const targetUser = await db
.select()
.from(users)
.where(eq(users.id, targetUserId))
.limit(1);
if (targetUser.length === 0) {
return res.status(404).json({ error: "User not found" });
}
const role = await db
.select()
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error:
"System roles (admin, user) are automatically assigned and cannot be manually assigned",
});
}
const existing = await db
.select()
.from(userRoles)
.where(
and(eq(userRoles.userId, targetUserId), eq(userRoles.roleId, roleId)),
)
.limit(1);
if (existing.length > 0) {
return res.status(409).json({ error: "Role already assigned" });
}
await db.insert(userRoles).values({
userId: targetUserId,
roleId,
grantedBy: currentUserId,
});
const hostsSharedWithRole = await db
.select()
.from(hostAccess)
.innerJoin(sshData, eq(hostAccess.hostId, sshData.id))
.where(eq(hostAccess.roleId, roleId));
const { SharedCredentialManager } =
await import("../../utils/shared-credential-manager.js");
const sharedCredManager = SharedCredentialManager.getInstance();
for (const { host_access, ssh_data } of hostsSharedWithRole) {
if (ssh_data.credentialId) {
try {
await sharedCredManager.createSharedCredentialForUser(
host_access.id,
ssh_data.credentialId,
targetUserId,
ssh_data.userId,
);
} catch (error) {
databaseLogger.error(
"Failed to create shared credential for new role member",
error,
{
operation: "assign_role_create_credentials",
targetUserId,
roleId,
hostId: ssh_data.id,
},
);
}
}
}
permissionManager.invalidateUserPermissionCache(targetUserId);
res.json({
success: true,
message: "Role assigned successfully",
});
} catch (error) {
databaseLogger.error("Failed to assign role", error, {
operation: "assign_role",
targetUserId,
});
res.status(500).json({ error: "Failed to assign role" });
}
},
);
// Remove role from user
// DELETE /rbac/users/:userId/roles/:roleId
router.delete(
"/users/:userId/roles/:roleId",
authenticateJWT,
permissionManager.requireAdmin(),
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const roleId = parseInt(req.params.roleId, 10);
if (isNaN(roleId)) {
return res.status(400).json({ error: "Invalid role ID" });
}
try {
const role = await db
.select({
id: roles.id,
name: roles.name,
isSystem: roles.isSystem,
})
.from(roles)
.where(eq(roles.id, roleId))
.limit(1);
if (role.length === 0) {
return res.status(404).json({ error: "Role not found" });
}
if (role[0].isSystem) {
return res.status(403).json({
error:
"System roles (admin, user) are automatically assigned and cannot be removed",
});
}
await db
.delete(userRoles)
.where(
and(eq(userRoles.userId, targetUserId), eq(userRoles.roleId, roleId)),
);
permissionManager.invalidateUserPermissionCache(targetUserId);
res.json({
success: true,
message: "Role removed successfully",
});
} catch (error) {
databaseLogger.error("Failed to remove role", error, {
operation: "remove_role",
targetUserId,
roleId,
});
res.status(500).json({ error: "Failed to remove role" });
}
},
);
// Get user's roles
// GET /rbac/users/:userId/roles
router.get(
"/users/:userId/roles",
authenticateJWT,
async (req: AuthenticatedRequest, res: Response) => {
const targetUserId = req.params.userId;
const currentUserId = req.userId!;
if (
targetUserId !== currentUserId &&
!(await permissionManager.isAdmin(currentUserId))
) {
return res.status(403).json({ error: "Access denied" });
}
try {
const userRolesList = await db
.select({
id: userRoles.id,
roleId: roles.id,
roleName: roles.name,
roleDisplayName: roles.displayName,
description: roles.description,
isSystem: roles.isSystem,
grantedAt: userRoles.grantedAt,
})
.from(userRoles)
.innerJoin(roles, eq(userRoles.roleId, roles.id))
.where(eq(userRoles.userId, targetUserId));
res.json({ roles: userRolesList });
} catch (error) {
databaseLogger.error("Failed to get user roles", error, {
operation: "get_user_roles",
targetUserId,
});
res.status(500).json({ error: "Failed to get user roles" });
}
},
);
export default router;
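
A minimal sketch of driving the role-assignment endpoint above from a TypeScript client. The API origin, the Bearer-token transport, and the concrete IDs are assumptions for illustration; the route path, request body, and status codes come from the handler.

const API = "http://localhost:8080"; // hypothetical origin
const token = process.env.TERMIX_JWT ?? ""; // assumed to hold an admin JWT

async function assignRole(userId: string, roleId: number): Promise<void> {
  const res = await fetch(`${API}/rbac/users/${encodeURIComponent(userId)}/roles`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ roleId }), // roleId must be a number, per the handler
  });
  if (!res.ok) {
    // 400 invalid body, 403 system role, 404 unknown user/role, 409 already assigned
    const body = (await res.json()) as { error?: string };
    throw new Error(`Role assignment failed (${res.status}): ${body.error}`);
  }
}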


@@ -0,0 +1,935 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { snippets, snippetFolders } from "../db/schema.js";
import { eq, and, desc, asc, sql } from "drizzle-orm";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
const router = express.Router();
function isNonEmptyString(val: unknown): val is string {
return typeof val === "string" && val.trim().length > 0;
}
const authManager = AuthManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const requireDataAccess = authManager.createDataAccessMiddleware();
// Get all snippet folders
// GET /snippets/folders
router.get(
"/folders",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for snippet folders fetch");
return res.status(400).json({ error: "Invalid userId" });
}
try {
const result = await db
.select()
.from(snippetFolders)
.where(eq(snippetFolders.userId, userId))
.orderBy(asc(snippetFolders.name));
res.json(result);
} catch (err) {
authLogger.error("Failed to fetch snippet folders", err);
res.status(500).json({ error: "Failed to fetch snippet folders" });
}
},
);
// Create a new snippet folder
// POST /snippets/folders
router.post(
"/folders",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name, color, icon } = req.body;
if (!isNonEmptyString(userId) || !isNonEmptyString(name)) {
authLogger.warn("Invalid snippet folder creation data", {
operation: "snippet_folder_create",
userId,
hasName: !!name,
});
return res.status(400).json({ error: "Folder name is required" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(eq(snippetFolders.userId, userId), eq(snippetFolders.name, name)),
);
if (existing.length > 0) {
return res
.status(409)
.json({ error: "Folder with this name already exists" });
}
const insertData = {
userId,
name: name.trim(),
color: color?.trim() || null,
icon: icon?.trim() || null,
};
const result = await db
.insert(snippetFolders)
.values(insertData)
.returning();
authLogger.success(`Snippet folder created: ${name} by user ${userId}`, {
operation: "snippet_folder_create_success",
userId,
name,
});
res.status(201).json(result[0]);
} catch (err) {
authLogger.error("Failed to create snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to create snippet folder",
});
}
},
);
// Update snippet folder metadata (color, icon)
// PUT /snippets/folders/:name/metadata
router.put(
"/folders/:name/metadata",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name } = req.params;
const { color, icon } = req.body;
if (!isNonEmptyString(userId) || !name) {
authLogger.warn("Invalid request for snippet folder metadata update");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
if (existing.length === 0) {
return res.status(404).json({ error: "Folder not found" });
}
const updateFields: Partial<{
color: string | null;
icon: string | null;
updatedAt: ReturnType<typeof sql.raw>;
}> = {
updatedAt: sql`CURRENT_TIMESTAMP`,
};
if (color !== undefined) updateFields.color = color?.trim() || null;
if (icon !== undefined) updateFields.icon = icon?.trim() || null;
await db
.update(snippetFolders)
.set(updateFields)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
const updated = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, decodeURIComponent(name)),
),
);
authLogger.success(
`Snippet folder metadata updated: ${name} by user ${userId}`,
{
operation: "snippet_folder_metadata_update_success",
userId,
name,
},
);
res.json(updated[0]);
} catch (err) {
authLogger.error("Failed to update snippet folder metadata", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to update snippet folder metadata",
});
}
},
);
// Rename snippet folder
// PUT /snippets/folders/rename
router.put(
"/folders/rename",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { oldName, newName } = req.body;
if (
!isNonEmptyString(userId) ||
!isNonEmptyString(oldName) ||
!isNonEmptyString(newName)
) {
authLogger.warn("Invalid request for snippet folder rename");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, oldName),
),
);
if (existing.length === 0) {
return res.status(404).json({ error: "Folder not found" });
}
const nameExists = await db
.select()
.from(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, newName),
),
);
if (nameExists.length > 0) {
return res
.status(409)
.json({ error: "Folder with new name already exists" });
}
await db
.update(snippetFolders)
.set({ name: newName, updatedAt: sql`CURRENT_TIMESTAMP` })
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, oldName),
),
);
await db
.update(snippets)
.set({ folder: newName })
.where(and(eq(snippets.userId, userId), eq(snippets.folder, oldName)));
authLogger.success(
`Snippet folder renamed: ${oldName} -> ${newName} by user ${userId}`,
{
operation: "snippet_folder_rename_success",
userId,
oldName,
newName,
},
);
res.json({ success: true, oldName, newName });
} catch (err) {
authLogger.error("Failed to rename snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to rename snippet folder",
});
}
},
);
// Delete snippet folder
// DELETE /snippets/folders/:name
router.delete(
"/folders/:name",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name } = req.params;
if (!isNonEmptyString(userId) || !name) {
authLogger.warn("Invalid request for snippet folder delete");
return res.status(400).json({ error: "Invalid request" });
}
try {
const folderName = decodeURIComponent(name);
await db
.update(snippets)
.set({ folder: null })
.where(
and(eq(snippets.userId, userId), eq(snippets.folder, folderName)),
);
await db
.delete(snippetFolders)
.where(
and(
eq(snippetFolders.userId, userId),
eq(snippetFolders.name, folderName),
),
);
authLogger.success(
`Snippet folder deleted: ${folderName} by user ${userId}`,
{
operation: "snippet_folder_delete_success",
userId,
name: folderName,
},
);
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to delete snippet folder", err);
res.status(500).json({
error:
err instanceof Error
? err.message
: "Failed to delete snippet folder",
});
}
},
);
// Reorder snippets (bulk update)
// PUT /snippets/reorder
router.put(
"/reorder",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { snippets: snippetUpdates } = req.body;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for snippet reorder");
return res.status(400).json({ error: "Invalid userId" });
}
if (!Array.isArray(snippetUpdates) || snippetUpdates.length === 0) {
authLogger.warn("Invalid snippet reorder data", {
operation: "snippet_reorder",
userId,
});
return res
.status(400)
.json({ error: "snippets array is required and must not be empty" });
}
try {
for (const update of snippetUpdates) {
const { id, order, folder } = update;
if (!id || order === undefined) {
continue;
}
const updateFields: Partial<{
order: number;
folder: string | null;
}> = {
order,
};
if (folder !== undefined) {
updateFields.folder = folder?.trim() || null;
}
await db
.update(snippets)
.set(updateFields)
.where(and(eq(snippets.id, id), eq(snippets.userId, userId)));
}
authLogger.success(`Snippets reordered by user ${userId}`, {
operation: "snippet_reorder_success",
userId,
count: snippetUpdates.length,
});
res.json({ success: true, updated: snippetUpdates.length });
} catch (err) {
authLogger.error("Failed to reorder snippets", err);
res.status(500).json({
error:
err instanceof Error ? err.message : "Failed to reorder snippets",
});
}
},
);
// Execute a snippet on a host
// POST /snippets/execute
router.post(
"/execute",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { snippetId, hostId } = req.body;
if (!isNonEmptyString(userId) || !snippetId || !hostId) {
authLogger.warn("Invalid snippet execution request", {
userId,
snippetId,
hostId,
});
return res
.status(400)
.json({ error: "Snippet ID and Host ID are required" });
}
try {
const snippetResult = await db
.select()
.from(snippets)
.where(
and(
eq(snippets.id, parseInt(snippetId)),
eq(snippets.userId, userId),
),
);
if (snippetResult.length === 0) {
return res.status(404).json({ error: "Snippet not found" });
}
const snippet = snippetResult[0];
const { Client } = await import("ssh2");
const { sshData, sshCredentials } = await import("../db/schema.js");
const { SimpleDBOps } = await import("../../utils/simple-db-ops.js");
const hostResult = await SimpleDBOps.select(
db
.select()
.from(sshData)
.where(
and(eq(sshData.id, parseInt(hostId)), eq(sshData.userId, userId)),
),
"ssh_data",
userId,
);
if (hostResult.length === 0) {
return res.status(404).json({ error: "Host not found" });
}
const host = hostResult[0];
let password = host.password;
let privateKey = host.key;
let passphrase = host.key_password;
let authType = host.authType;
if (host.credentialId) {
const credResult = await SimpleDBOps.select(
db
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, host.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credResult.length > 0) {
const cred = credResult[0];
authType = (cred.auth_type || cred.authType || authType) as string;
password = (cred.password || undefined) as string | undefined;
privateKey = (cred.private_key || cred.key || undefined) as
| string
| undefined;
passphrase = (cred.key_password || undefined) as string | undefined;
}
}
const conn = new Client();
let output = "";
let errorOutput = "";
const executePromise = new Promise<{
success: boolean;
output: string;
error?: string;
}>((resolve, reject) => {
const timeout = setTimeout(() => {
conn.end();
reject(new Error("Command execution timeout (30s)"));
}, 30000);
conn.on("ready", () => {
conn.exec(snippet.content, (err, stream) => {
if (err) {
clearTimeout(timeout);
conn.end();
return reject(err);
}
stream.on("close", () => {
clearTimeout(timeout);
conn.end();
if (errorOutput) {
resolve({ success: false, output, error: errorOutput });
} else {
resolve({ success: true, output });
}
});
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.stderr.on("data", (data: Buffer) => {
errorOutput += data.toString();
});
});
});
conn.on("error", (err) => {
clearTimeout(timeout);
reject(err);
});
const config: any = {
host: host.ip,
port: host.port,
username: host.username,
tryKeyboard: true,
keepaliveInterval: 30000,
keepaliveCountMax: 3,
readyTimeout: 30000,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
timeout: 30000,
env: {
TERM: "xterm-256color",
LANG: "en_US.UTF-8",
LC_ALL: "en_US.UTF-8",
LC_CTYPE: "en_US.UTF-8",
LC_MESSAGES: "en_US.UTF-8",
LC_MONETARY: "en_US.UTF-8",
LC_NUMERIC: "en_US.UTF-8",
LC_TIME: "en_US.UTF-8",
LC_COLLATE: "en_US.UTF-8",
COLORTERM: "truecolor",
},
algorithms: {
kex: [
"curve25519-sha256",
"curve25519-sha256@libssh.org",
"ecdh-sha2-nistp521",
"ecdh-sha2-nistp384",
"ecdh-sha2-nistp256",
"diffie-hellman-group-exchange-sha256",
"diffie-hellman-group14-sha256",
"diffie-hellman-group14-sha1",
"diffie-hellman-group-exchange-sha1",
"diffie-hellman-group1-sha1",
],
serverHostKey: [
"ssh-ed25519",
"ecdsa-sha2-nistp521",
"ecdsa-sha2-nistp384",
"ecdsa-sha2-nistp256",
"rsa-sha2-512",
"rsa-sha2-256",
"ssh-rsa",
"ssh-dss",
],
cipher: [
"chacha20-poly1305@openssh.com",
"aes256-gcm@openssh.com",
"aes128-gcm@openssh.com",
"aes256-ctr",
"aes192-ctr",
"aes128-ctr",
"aes256-cbc",
"aes192-cbc",
"aes128-cbc",
"3des-cbc",
],
hmac: [
"hmac-sha2-512-etm@openssh.com",
"hmac-sha2-256-etm@openssh.com",
"hmac-sha2-512",
"hmac-sha2-256",
"hmac-sha1",
"hmac-md5",
],
compress: ["none", "zlib@openssh.com", "zlib"],
},
};
if (authType === "password" && password) {
config.password = password;
} else if (authType === "key" && privateKey) {
const cleanKey = (privateKey as string)
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (passphrase) {
config.passphrase = passphrase;
}
} else if (password) {
config.password = password;
} else if (privateKey) {
const cleanKey = (privateKey as string)
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (passphrase) {
config.passphrase = passphrase;
}
}
conn.connect(config);
});
const result = await executePromise;
authLogger.success(
`Snippet executed: ${snippet.name} on host ${hostId}`,
{
operation: "snippet_execute_success",
userId,
snippetId,
hostId,
},
);
res.json(result);
} catch (err) {
authLogger.error("Failed to execute snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to execute snippet",
});
}
},
);
// Get all snippets for the authenticated user
// GET /snippets
router.get(
"/",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
if (!isNonEmptyString(userId)) {
authLogger.warn("Invalid userId for snippets fetch");
return res.status(400).json({ error: "Invalid userId" });
}
try {
const result = await db
.select()
.from(snippets)
.where(eq(snippets.userId, userId))
.orderBy(
sql`CASE WHEN ${snippets.folder} IS NULL OR ${snippets.folder} = '' THEN 0 ELSE 1 END`,
asc(snippets.folder),
asc(snippets.order),
desc(snippets.updatedAt),
);
res.json(result);
} catch (err) {
authLogger.error("Failed to fetch snippets", err);
res.status(500).json({ error: "Failed to fetch snippets" });
}
},
);
// Get a specific snippet by ID
// GET /snippets/:id
router.get(
"/:id",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
const snippetId = parseInt(id, 10);
if (!isNonEmptyString(userId) || isNaN(snippetId)) {
authLogger.warn("Invalid request for snippet fetch: invalid ID", {
userId,
id,
});
return res.status(400).json({ error: "Invalid request parameters" });
}
try {
const result = await db
.select()
.from(snippets)
.where(and(eq(snippets.id, snippetId), eq(snippets.userId, userId)));
if (result.length === 0) {
return res.status(404).json({ error: "Snippet not found" });
}
res.json(result[0]);
} catch (err) {
authLogger.error("Failed to fetch snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to fetch snippet",
});
}
},
);
// Create a new snippet
// POST /snippets
router.post(
"/",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { name, content, description, folder, order } = req.body;
if (
!isNonEmptyString(userId) ||
!isNonEmptyString(name) ||
!isNonEmptyString(content)
) {
authLogger.warn("Invalid snippet creation data validation failed", {
operation: "snippet_create",
userId,
hasName: !!name,
hasContent: !!content,
});
return res.status(400).json({ error: "Name and content are required" });
}
try {
let snippetOrder = order;
if (snippetOrder === undefined || snippetOrder === null) {
const folderValue = folder?.trim() || "";
const maxOrderResult = await db
.select({ maxOrder: sql<number>`MAX(${snippets.order})` })
.from(snippets)
.where(
and(
eq(snippets.userId, userId),
folderValue
? eq(snippets.folder, folderValue)
: sql`(${snippets.folder} IS NULL OR ${snippets.folder} = '')`,
),
);
const maxOrder = maxOrderResult[0]?.maxOrder ?? -1;
snippetOrder = maxOrder + 1;
}
const insertData = {
userId,
name: name.trim(),
content: content.trim(),
description: description?.trim() || null,
folder: folder?.trim() || null,
order: snippetOrder,
};
const result = await db.insert(snippets).values(insertData).returning();
authLogger.success(`Snippet created: ${name} by user ${userId}`, {
operation: "snippet_create_success",
userId,
snippetId: result[0].id,
name,
});
res.status(201).json(result[0]);
} catch (err) {
authLogger.error("Failed to create snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to create snippet",
});
}
},
);
// Update a snippet
// PUT /snippets/:id
router.put(
"/:id",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
const updateData = req.body;
if (!isNonEmptyString(userId) || !id) {
authLogger.warn("Invalid request for snippet update");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippets)
.where(and(eq(snippets.id, parseInt(id)), eq(snippets.userId, userId)));
if (existing.length === 0) {
return res.status(404).json({ error: "Snippet not found" });
}
const updateFields: Partial<{
updatedAt: ReturnType<typeof sql.raw>;
name: string;
content: string;
description: string | null;
folder: string | null;
order: number;
}> = {
updatedAt: sql`CURRENT_TIMESTAMP`,
};
if (updateData.name !== undefined)
updateFields.name = updateData.name.trim();
if (updateData.content !== undefined)
updateFields.content = updateData.content.trim();
if (updateData.description !== undefined)
updateFields.description = updateData.description?.trim() || null;
if (updateData.folder !== undefined)
updateFields.folder = updateData.folder?.trim() || null;
if (updateData.order !== undefined) updateFields.order = updateData.order;
await db
.update(snippets)
.set(updateFields)
.where(and(eq(snippets.id, parseInt(id)), eq(snippets.userId, userId)));
const updated = await db
.select()
.from(snippets)
.where(eq(snippets.id, parseInt(id)));
authLogger.success(
`Snippet updated: ${updated[0].name} by user ${userId}`,
{
operation: "snippet_update_success",
userId,
snippetId: parseInt(id),
name: updated[0].name,
},
);
res.json(updated[0]);
} catch (err) {
authLogger.error("Failed to update snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to update snippet",
});
}
},
);
// Delete a snippet
// DELETE /snippets/:id
router.delete(
"/:id",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { id } = req.params;
if (!isNonEmptyString(userId) || !id) {
authLogger.warn("Invalid request for snippet delete");
return res.status(400).json({ error: "Invalid request" });
}
try {
const existing = await db
.select()
.from(snippets)
.where(and(eq(snippets.id, parseInt(id)), eq(snippets.userId, userId)));
if (existing.length === 0) {
return res.status(404).json({ error: "Snippet not found" });
}
await db
.delete(snippets)
.where(and(eq(snippets.id, parseInt(id)), eq(snippets.userId, userId)));
authLogger.success(
`Snippet deleted: ${existing[0].name} by user ${userId}`,
{
operation: "snippet_delete_success",
userId,
snippetId: parseInt(id),
name: existing[0].name,
},
);
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to delete snippet", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to delete snippet",
});
}
},
);
export default router;
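
A short sketch of a client-side flow against the snippet routes above: create a snippet, then run it on a host. The "/snippets" prefix matches the route comments; the origin, auth transport, and host ID are placeholders.

async function createAndRunSnippet(hostId: number): Promise<void> {
  const headers = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.TERMIX_JWT ?? ""}`, // assumed JWT source
  };
  const created = await fetch("http://localhost:8080/snippets", {
    method: "POST",
    headers,
    body: JSON.stringify({ name: "disk usage", content: "df -h /" }),
  }).then((r) => r.json());
  // POST /snippets/execute opens an SSH connection to the host, runs the snippet
  // content with a 30 s timeout, and returns { success, output, error? }.
  const result = await fetch("http://localhost:8080/snippets/execute", {
    method: "POST",
    headers,
    body: JSON.stringify({ snippetId: created.id, hostId }),
  }).then((r) => r.json());
  console.log(result.success ? result.output : result.error);
}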

File diff suppressed because it is too large.


@@ -0,0 +1,195 @@
import type { AuthenticatedRequest } from "../../../types/index.js";
import express from "express";
import { db } from "../db/index.js";
import { commandHistory } from "../db/schema.js";
import { eq, and, desc, sql } from "drizzle-orm";
import type { Request, Response } from "express";
import { authLogger } from "../../utils/logger.js";
import { AuthManager } from "../../utils/auth-manager.js";
const router = express.Router();
function isNonEmptyString(val: unknown): val is string {
return typeof val === "string" && val.trim().length > 0;
}
const authManager = AuthManager.getInstance();
const authenticateJWT = authManager.createAuthMiddleware();
const requireDataAccess = authManager.createDataAccessMiddleware();
// Save command to history
// POST /terminal/command_history
router.post(
"/command_history",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId, command } = req.body;
if (!isNonEmptyString(userId) || !hostId || !isNonEmptyString(command)) {
authLogger.warn("Invalid command history save request", {
operation: "command_history_save",
userId,
hasHostId: !!hostId,
hasCommand: !!command,
});
return res.status(400).json({ error: "Missing required parameters" });
}
try {
const insertData = {
userId,
hostId: parseInt(hostId, 10),
command: command.trim(),
};
const result = await db
.insert(commandHistory)
.values(insertData)
.returning();
res.status(201).json(result[0]);
} catch (err) {
authLogger.error("Failed to save command to history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to save command",
});
}
},
);
// Get command history for a specific host
// GET /terminal/command_history/:hostId
router.get(
"/command_history/:hostId",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.params;
const hostIdNum = parseInt(hostId, 10);
if (!isNonEmptyString(userId) || isNaN(hostIdNum)) {
authLogger.warn("Invalid command history fetch request", {
userId,
hostId: hostIdNum,
});
return res.status(400).json({ error: "Invalid request parameters" });
}
try {
const result = await db
.select({
command: commandHistory.command,
maxExecutedAt: sql<number>`MAX(${commandHistory.executedAt})`,
})
.from(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
),
)
.groupBy(commandHistory.command)
.orderBy(desc(sql`MAX(${commandHistory.executedAt})`))
.limit(500);
const uniqueCommands = result.map((r) => r.command);
res.json(uniqueCommands);
} catch (err) {
authLogger.error("Failed to fetch command history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to fetch history",
});
}
},
);
// Delete a specific command from history
// POST /terminal/command_history/delete
router.post(
"/command_history/delete",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId, command } = req.body;
if (!isNonEmptyString(userId) || !hostId || !isNonEmptyString(command)) {
authLogger.warn("Invalid command delete request", {
operation: "command_history_delete",
userId,
hasHostId: !!hostId,
hasCommand: !!command,
});
return res.status(400).json({ error: "Missing required parameters" });
}
try {
const hostIdNum = parseInt(hostId, 10);
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
eq(commandHistory.command, command.trim()),
),
);
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to delete command from history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to delete command",
});
}
},
);
// Clear command history for a specific host (optional feature)
// DELETE /terminal/command_history/:hostId
router.delete(
"/command_history/:hostId",
authenticateJWT,
requireDataAccess,
async (req: Request, res: Response) => {
const userId = (req as AuthenticatedRequest).userId;
const { hostId } = req.params;
const hostIdNum = parseInt(hostId, 10);
if (!isNonEmptyString(userId) || isNaN(hostIdNum)) {
authLogger.warn("Invalid command history clear request");
return res.status(400).json({ error: "Invalid request" });
}
try {
await db
.delete(commandHistory)
.where(
and(
eq(commandHistory.userId, userId),
eq(commandHistory.hostId, hostIdNum),
),
);
authLogger.success(`Command history cleared for host ${hostId}`, {
operation: "command_history_clear_success",
userId,
hostId: hostIdNum,
});
res.json({ success: true });
} catch (err) {
authLogger.error("Failed to clear command history", err);
res.status(500).json({
error: err instanceof Error ? err.message : "Failed to clear history",
});
}
},
);
export default router;
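
A sketch of recording a command and reading back the de-duplicated history for a host. The "/terminal" prefix comes from the route comments; the origin and auth transport are assumptions.

async function recordAndFetchHistory(hostId: number, command: string): Promise<string[]> {
  const headers = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.TERMIX_JWT ?? ""}`, // assumed JWT source
  };
  await fetch("http://localhost:8080/terminal/command_history", {
    method: "POST",
    headers,
    body: JSON.stringify({ hostId, command }),
  });
  // The GET endpoint returns up to 500 unique commands, most recently executed first.
  const res = await fetch(`http://localhost:8080/terminal/command_history/${hostId}`, {
    headers,
  });
  return (await res.json()) as string[];
}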

File diff suppressed because it is too large.


@@ -0,0 +1,632 @@
import { Client as SSHClient } from "ssh2";
import { WebSocketServer, WebSocket } from "ws";
import { parse as parseUrl } from "url";
import { AuthManager } from "../utils/auth-manager.js";
import { sshData, sshCredentials } from "../database/db/schema.js";
import { and, eq } from "drizzle-orm";
import { getDb } from "../database/db/index.js";
import { SimpleDBOps } from "../utils/simple-db-ops.js";
import { systemLogger } from "../utils/logger.js";
import type { SSHHost } from "../../types/index.js";
const dockerConsoleLogger = systemLogger;
interface SSHSession {
client: SSHClient;
stream: any;
isConnected: boolean;
containerId?: string;
shell?: string;
}
const activeSessions = new Map<string, SSHSession>();
const wss = new WebSocketServer({
host: "0.0.0.0",
port: 30008,
verifyClient: (info, callback) => {
// ws evaluates a single-argument verifyClient synchronously, so an async function
// (whose returned Promise is always truthy) would accept every connection. Use the
// callback form instead, and attach the verified userId for the connection handler.
void (async () => {
try {
const url = parseUrl(info.req.url || "", true);
const token = url.query.token as string;
if (!token) {
return callback(false, 401, "Unauthorized");
}
const authManager = AuthManager.getInstance();
const decoded = await authManager.verifyJWTToken(token);
if (!decoded || !decoded.userId) {
return callback(false, 401, "Unauthorized");
}
(info.req as any).userId = decoded.userId;
callback(true);
} catch (error) {
callback(false, 401, "Unauthorized");
}
})();
},
});
async function detectShell(
session: SSHSession,
containerId: string,
): Promise<string> {
const shells = ["bash", "sh", "ash"];
for (const shell of shells) {
try {
await new Promise<void>((resolve, reject) => {
session.client.exec(
`docker exec ${containerId} which ${shell}`,
(err, stream) => {
if (err) return reject(err);
let output = "";
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.on("close", (code: number) => {
if (code === 0 && output.trim()) {
resolve();
} else {
reject(new Error(`Shell ${shell} not found`));
}
});
stream.stderr.on("data", () => {
// Ignore stderr
});
},
);
});
return shell;
} catch {
continue;
}
}
return "sh";
}
async function createJumpHostChain(
jumpHosts: any[],
userId: string,
): Promise<SSHClient | null> {
if (!jumpHosts || jumpHosts.length === 0) {
return null;
}
let currentClient: SSHClient | null = null;
for (let i = 0; i < jumpHosts.length; i++) {
const jumpHostId = jumpHosts[i].hostId;
const jumpHostData = await SimpleDBOps.select(
getDb()
.select()
.from(sshData)
.where(and(eq(sshData.id, jumpHostId), eq(sshData.userId, userId))),
"ssh_data",
userId,
);
if (jumpHostData.length === 0) {
throw new Error(`Jump host ${jumpHostId} not found`);
}
const jumpHost = jumpHostData[0] as unknown as SSHHost;
if (typeof jumpHost.jumpHosts === "string" && jumpHost.jumpHosts) {
try {
jumpHost.jumpHosts = JSON.parse(jumpHost.jumpHosts);
} catch (e) {
dockerConsoleLogger.error("Failed to parse jump hosts", e, {
hostId: jumpHost.id,
});
jumpHost.jumpHosts = [];
}
}
let resolvedCredentials: any = {
password: jumpHost.password,
sshKey: jumpHost.key,
keyPassword: jumpHost.keyPassword,
authType: jumpHost.authType,
};
if (jumpHost.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, jumpHost.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
resolvedCredentials = {
password: credential.password,
sshKey:
credential.private_key || credential.privateKey || credential.key,
keyPassword: credential.key_password || credential.keyPassword,
authType: credential.auth_type || credential.authType,
};
}
}
const client = new SSHClient();
const config: any = {
host: jumpHost.ip,
port: jumpHost.port || 22,
username: jumpHost.username,
tryKeyboard: true,
readyTimeout: 60000,
keepaliveInterval: 30000,
keepaliveCountMax: 120,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
};
if (
resolvedCredentials.authType === "password" &&
resolvedCredentials.password
) {
config.password = resolvedCredentials.password;
} else if (
resolvedCredentials.authType === "key" &&
resolvedCredentials.sshKey
) {
const cleanKey = resolvedCredentials.sshKey
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (resolvedCredentials.keyPassword) {
config.passphrase = resolvedCredentials.keyPassword;
}
}
if (currentClient) {
await new Promise<void>((resolve, reject) => {
currentClient!.forwardOut(
"127.0.0.1",
0,
jumpHost.ip,
jumpHost.port || 22,
(err, stream) => {
if (err) return reject(err);
config.sock = stream;
resolve();
},
);
});
}
await new Promise<void>((resolve, reject) => {
client.on("ready", () => resolve());
client.on("error", reject);
client.connect(config);
});
currentClient = client;
}
return currentClient;
}
wss.on("connection", async (ws: WebSocket, req) => {
const userId = (req as any).userId;
const sessionId = `docker-console-${Date.now()}-${Math.random()}`;
let sshSession: SSHSession | null = null;
ws.on("message", async (data) => {
try {
const message = JSON.parse(data.toString());
switch (message.type) {
case "connect": {
const { hostConfig, containerId, shell, cols, rows } =
message.data as {
hostConfig: SSHHost;
containerId: string;
shell?: string;
cols?: number;
rows?: number;
};
if (
typeof hostConfig.jumpHosts === "string" &&
hostConfig.jumpHosts
) {
try {
hostConfig.jumpHosts = JSON.parse(hostConfig.jumpHosts);
} catch (e) {
dockerConsoleLogger.error("Failed to parse jump hosts", e, {
hostId: hostConfig.id,
});
hostConfig.jumpHosts = [];
}
}
if (!hostConfig || !containerId) {
ws.send(
JSON.stringify({
type: "error",
message: "Host configuration and container ID are required",
}),
);
return;
}
if (!hostConfig.enableDocker) {
ws.send(
JSON.stringify({
type: "error",
message:
"Docker is not enabled for this host. Enable it in Host Settings.",
}),
);
return;
}
try {
let resolvedCredentials: any = {
password: hostConfig.password,
sshKey: hostConfig.key,
keyPassword: hostConfig.keyPassword,
authType: hostConfig.authType,
};
if (hostConfig.credentialId) {
const credentials = await SimpleDBOps.select(
getDb()
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.id, hostConfig.credentialId as number),
eq(sshCredentials.userId, userId),
),
),
"ssh_credentials",
userId,
);
if (credentials.length > 0) {
const credential = credentials[0];
resolvedCredentials = {
password: credential.password,
sshKey:
credential.private_key ||
credential.privateKey ||
credential.key,
keyPassword:
credential.key_password || credential.keyPassword,
authType: credential.auth_type || credential.authType,
};
}
}
const client = new SSHClient();
const config: any = {
host: hostConfig.ip,
port: hostConfig.port || 22,
username: hostConfig.username,
tryKeyboard: true,
readyTimeout: 60000,
keepaliveInterval: 30000,
keepaliveCountMax: 120,
tcpKeepAlive: true,
tcpKeepAliveInitialDelay: 30000,
};
if (
resolvedCredentials.authType === "password" &&
resolvedCredentials.password
) {
config.password = resolvedCredentials.password;
} else if (
resolvedCredentials.authType === "key" &&
resolvedCredentials.sshKey
) {
const cleanKey = resolvedCredentials.sshKey
.trim()
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n");
config.privateKey = Buffer.from(cleanKey, "utf8");
if (resolvedCredentials.keyPassword) {
config.passphrase = resolvedCredentials.keyPassword;
}
}
if (hostConfig.jumpHosts && hostConfig.jumpHosts.length > 0) {
const jumpClient = await createJumpHostChain(
hostConfig.jumpHosts,
userId,
);
if (jumpClient) {
const stream = await new Promise<any>((resolve, reject) => {
jumpClient.forwardOut(
"127.0.0.1",
0,
hostConfig.ip,
hostConfig.port || 22,
(err, stream) => {
if (err) return reject(err);
resolve(stream);
},
);
});
config.sock = stream;
}
}
await new Promise<void>((resolve, reject) => {
client.on("ready", () => resolve());
client.on("error", reject);
client.connect(config);
});
sshSession = {
client,
stream: null,
isConnected: true,
containerId,
};
activeSessions.set(sessionId, sshSession);
let shellToUse = shell || "bash";
if (shell) {
try {
await new Promise<void>((resolve, reject) => {
client.exec(
`docker exec ${containerId} which ${shell}`,
(err, stream) => {
if (err) return reject(err);
let output = "";
stream.on("data", (data: Buffer) => {
output += data.toString();
});
stream.on("close", (code: number) => {
if (code === 0 && output.trim()) {
resolve();
} else {
reject(new Error(`Shell ${shell} not available`));
}
});
stream.stderr.on("data", () => {
// Ignore stderr
});
},
);
});
} catch {
dockerConsoleLogger.warn(
`Requested shell ${shell} not found, detecting available shell`,
{
operation: "shell_validation",
sessionId,
containerId,
requestedShell: shell,
},
);
shellToUse = await detectShell(sshSession, containerId);
}
} else {
shellToUse = await detectShell(sshSession, containerId);
}
sshSession.shell = shellToUse;
const execCommand = `docker exec -it ${containerId} /bin/${shellToUse}`;
client.exec(
execCommand,
{
pty: {
term: "xterm-256color",
cols: cols || 80,
rows: rows || 24,
},
},
(err, stream) => {
if (err) {
dockerConsoleLogger.error(
"Failed to create docker exec",
err,
{
operation: "docker_exec",
sessionId,
containerId,
},
);
ws.send(
JSON.stringify({
type: "error",
message: `Failed to start console: ${err.message}`,
}),
);
return;
}
sshSession!.stream = stream;
stream.on("data", (data: Buffer) => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(
JSON.stringify({
type: "output",
data: data.toString("utf8"),
}),
);
}
});
stream.stderr.on("data", (data: Buffer) => {});
stream.on("close", () => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(
JSON.stringify({
type: "disconnected",
message: "Console session ended",
}),
);
}
if (sshSession) {
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
ws.send(
JSON.stringify({
type: "connected",
data: {
shell: shellToUse,
requestedShell: shell,
shellChanged: shell && shell !== shellToUse,
},
}),
);
},
);
} catch (error) {
dockerConsoleLogger.error("Failed to connect to container", error, {
operation: "console_connect",
sessionId,
containerId: message.data.containerId,
});
ws.send(
JSON.stringify({
type: "error",
message:
error instanceof Error
? error.message
: "Failed to connect to container",
}),
);
}
break;
}
case "input": {
if (sshSession && sshSession.stream) {
sshSession.stream.write(message.data);
}
break;
}
case "resize": {
if (sshSession && sshSession.stream) {
const { cols, rows } = message.data;
sshSession.stream.setWindow(rows, cols);
}
break;
}
case "disconnect": {
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
ws.send(
JSON.stringify({
type: "disconnected",
message: "Disconnected from container",
}),
);
}
break;
}
case "ping": {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({ type: "pong" }));
}
break;
}
default:
dockerConsoleLogger.warn("Unknown message type", {
operation: "ws_message",
type: message.type,
});
}
} catch (error) {
dockerConsoleLogger.error("WebSocket message error", error, {
operation: "ws_message",
sessionId,
});
ws.send(
JSON.stringify({
type: "error",
message: error instanceof Error ? error.message : "An error occurred",
}),
);
}
});
ws.on("close", () => {
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
ws.on("error", (error) => {
dockerConsoleLogger.error("WebSocket error", error, {
operation: "ws_error",
sessionId,
});
if (sshSession) {
if (sshSession.stream) {
sshSession.stream.end();
}
sshSession.client.end();
activeSessions.delete(sessionId);
}
});
});
process.on("SIGTERM", () => {
activeSessions.forEach((session, sessionId) => {
if (session.stream) {
session.stream.end();
}
session.client.end();
});
activeSessions.clear();
wss.close(() => {
process.exit(0);
});
});
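
A minimal client sketch for the Docker console WebSocket above. The port (30008), the token query parameter, and the message shapes are taken from the server code; the JWT source, host fields, and container ID are placeholders for illustration.

import WebSocket from "ws";

const token = process.env.TERMIX_JWT ?? ""; // assumed to hold a valid JWT
const ws = new WebSocket(`ws://localhost:30008/?token=${token}`);

ws.on("open", () => {
  ws.send(
    JSON.stringify({
      type: "connect",
      data: {
        // hostConfig is an SSHHost record; only illustrative fields are shown here.
        hostConfig: {
          id: 1,
          ip: "192.0.2.10",
          port: 22,
          username: "root",
          authType: "password",
          password: "example",
          enableDocker: true,
        },
        containerId: "a1b2c3d4e5f6", // hypothetical container ID
        shell: "bash", // optional; the server falls back to detectShell()
        cols: 120,
        rows: 32,
      },
    }),
  );
});

ws.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg.type === "output") process.stdout.write(msg.data);
  else if (msg.type === "connected") console.log(`attached using shell ${msg.data.shell}`);
  else if (msg.type === "error") console.error(msg.message);
});

// Keystrokes are forwarded as { type: "input", data }, terminal resizes as
// { type: "resize", data: { cols, rows } }, and { type: "disconnect" } ends the session.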

src/backend/ssh/docker.ts (new file, 1904 lines). File diff suppressed because it is too large.

Four more file diffs suppressed because they are too large.


@@ -0,0 +1,101 @@
import type { Client } from "ssh2";
export function execCommand(
client: Client,
command: string,
timeoutMs = 30000,
): Promise<{
stdout: string;
stderr: string;
code: number | null;
}> {
return new Promise((resolve, reject) => {
let settled = false;
let stream: any = null;
const timeout = setTimeout(() => {
if (!settled) {
settled = true;
cleanup();
reject(new Error(`Command timeout after ${timeoutMs}ms: ${command}`));
}
}, timeoutMs);
const cleanup = () => {
clearTimeout(timeout);
if (stream) {
try {
stream.removeAllListeners();
if (stream.stderr) {
stream.stderr.removeAllListeners();
}
stream.destroy();
} catch (error) {
// Ignore cleanup errors
}
}
};
client.exec(command, { pty: false }, (err, _stream) => {
if (err) {
if (!settled) {
settled = true;
cleanup();
reject(err);
}
return;
}
stream = _stream;
let stdout = "";
let stderr = "";
let exitCode: number | null = null;
stream
.on("close", (code: number | undefined) => {
if (!settled) {
settled = true;
exitCode = typeof code === "number" ? code : null;
cleanup();
resolve({ stdout, stderr, code: exitCode });
}
})
.on("data", (data: Buffer) => {
stdout += data.toString("utf8");
})
.on("error", (streamErr: Error) => {
if (!settled) {
settled = true;
cleanup();
reject(streamErr);
}
});
if (stream.stderr) {
stream.stderr
.on("data", (data: Buffer) => {
stderr += data.toString("utf8");
})
.on("error", (stderrErr: Error) => {
if (!settled) {
settled = true;
cleanup();
reject(stderrErr);
}
});
}
});
});
}
export function toFixedNum(
n: number | null | undefined,
digits = 2,
): number | null {
if (typeof n !== "number" || !Number.isFinite(n)) return null;
return Number(n.toFixed(digits));
}
export function kibToGiB(kib: number): number {
return kib / (1024 * 1024);
}
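
A small usage sketch for execCommand with an ssh2 Client; the host details below are placeholders.

import { Client } from "ssh2";
import { execCommand } from "./common-utils.js";

const client = new Client();
client
  .on("ready", async () => {
    try {
      // Runs one non-PTY command and resolves with stdout/stderr/exit code,
      // or rejects after the default 30 s timeout.
      const { stdout, stderr, code } = await execCommand(client, "uptime -p");
      console.log({ stdout: stdout.trim(), stderr, code });
    } finally {
      client.end();
    }
  })
  .connect({ host: "192.0.2.10", port: 22, username: "root", password: "example" });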


@@ -0,0 +1,91 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum } from "./common-utils.js";
function parseCpuLine(
cpuLine: string,
): { total: number; idle: number } | undefined {
const parts = cpuLine.trim().split(/\s+/);
if (parts[0] !== "cpu") return undefined;
const nums = parts
.slice(1)
.map((n) => Number(n))
.filter((n) => Number.isFinite(n));
if (nums.length < 4) return undefined;
const idle = (nums[3] ?? 0) + (nums[4] ?? 0);
const total = nums.reduce((a, b) => a + b, 0);
return { total, idle };
}
export async function collectCpuMetrics(client: Client): Promise<{
percent: number | null;
cores: number | null;
load: [number, number, number] | null;
}> {
let cpuPercent: number | null = null;
let cores: number | null = null;
let loadTriplet: [number, number, number] | null = null;
try {
const [stat1, loadAvgOut, coresOut] = await Promise.race([
Promise.all([
execCommand(client, "cat /proc/stat"),
execCommand(client, "cat /proc/loadavg"),
execCommand(
client,
"nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo",
),
]),
new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error("CPU metrics collection timeout")),
25000,
),
),
]);
await new Promise((r) => setTimeout(r, 500));
const stat2 = await execCommand(client, "cat /proc/stat");
const cpuLine1 = (
stat1.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const cpuLine2 = (
stat2.stdout.split("\n").find((l) => l.startsWith("cpu ")) || ""
).trim();
const a = parseCpuLine(cpuLine1);
const b = parseCpuLine(cpuLine2);
if (a && b) {
const totalDiff = b.total - a.total;
const idleDiff = b.idle - a.idle;
const used = totalDiff - idleDiff;
if (totalDiff > 0)
cpuPercent = Math.max(0, Math.min(100, (used / totalDiff) * 100));
}
const laParts = loadAvgOut.stdout.trim().split(/\s+/);
if (laParts.length >= 3) {
loadTriplet = [
Number(laParts[0]),
Number(laParts[1]),
Number(laParts[2]),
].map((v) => (Number.isFinite(v) ? Number(v) : 0)) as [
number,
number,
number,
];
}
const coresNum = Number((coresOut.stdout || "").trim());
cores = Number.isFinite(coresNum) && coresNum > 0 ? coresNum : null;
} catch (e) {
cpuPercent = null;
cores = null;
loadTriplet = null;
}
return {
percent: toFixedNum(cpuPercent, 0),
cores,
load: loadTriplet,
};
}
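
The percentage above comes from two /proc/stat snapshots taken 500 ms apart: busy time is the change in total jiffies minus the change in idle time (idle plus iowait). A worked example with made-up numbers:

// Sample 1: cpu  4705 150 1120 16250 520 0 175 0 0 0  -> total 22920, idle 16250 + 520 = 16770
// Sample 2: cpu  4850 152 1160 16480 525 0 180 0 0 0  -> total 23347, idle 16480 + 525 = 17005
const totalDiff = 23347 - 22920; // 427 jiffies elapsed across all CPUs
const idleDiff = 17005 - 16770; // 235 of them spent idle or waiting on I/O
const percent = ((totalDiff - idleDiff) / totalDiff) * 100;
console.log(percent.toFixed(0)); // ~45 (% busy over the sampling window)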


@@ -0,0 +1,67 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum } from "./common-utils.js";
export async function collectDiskMetrics(client: Client): Promise<{
percent: number | null;
usedHuman: string | null;
totalHuman: string | null;
availableHuman: string | null;
}> {
let diskPercent: number | null = null;
let usedHuman: string | null = null;
let totalHuman: string | null = null;
let availableHuman: string | null = null;
try {
const [diskOutHuman, diskOutBytes] = await Promise.all([
execCommand(client, "df -h -P / | tail -n +2"),
execCommand(client, "df -B1 -P / | tail -n +2"),
]);
const humanLine =
diskOutHuman.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const bytesLine =
diskOutBytes.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean)[0] || "";
const humanParts = humanLine.split(/\s+/);
const bytesParts = bytesLine.split(/\s+/);
if (humanParts.length >= 6 && bytesParts.length >= 6) {
totalHuman = humanParts[1] || null;
usedHuman = humanParts[2] || null;
availableHuman = humanParts[3] || null;
const totalBytes = Number(bytesParts[1]);
const usedBytes = Number(bytesParts[2]);
if (
Number.isFinite(totalBytes) &&
Number.isFinite(usedBytes) &&
totalBytes > 0
) {
diskPercent = Math.max(
0,
Math.min(100, (usedBytes / totalBytes) * 100),
);
}
}
} catch (e) {
diskPercent = null;
usedHuman = null;
totalHuman = null;
availableHuman = null;
}
return {
percent: toFixedNum(diskPercent, 0),
usedHuman,
totalHuman,
availableHuman,
};
}


@@ -0,0 +1,137 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export interface LoginRecord {
user: string;
ip: string;
time: string;
status: "success" | "failed";
}
export interface LoginStats {
recentLogins: LoginRecord[];
failedLogins: LoginRecord[];
totalLogins: number;
uniqueIPs: number;
}
export async function collectLoginStats(client: Client): Promise<LoginStats> {
const recentLogins: LoginRecord[] = [];
const failedLogins: LoginRecord[] = [];
const ipSet = new Set<string>();
try {
const lastOut = await execCommand(
client,
"last -n 20 -F -w | grep -v 'reboot' | grep -v 'wtmp' | head -20",
);
const lastLines = lastOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
for (const line of lastLines) {
const parts = line.split(/\s+/);
if (parts.length >= 10) {
const user = parts[0];
const tty = parts[1];
const ip =
parts[2] === ":" || parts[2].startsWith(":") ? "local" : parts[2];
const timeStart = parts.indexOf(
parts.find((p) => /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun)/.test(p)) || "",
);
if (timeStart > 0 && parts.length > timeStart + 4) {
const timeStr = parts.slice(timeStart, timeStart + 5).join(" ");
if (user && user !== "wtmp" && tty !== "system") {
let parsedTime: string;
try {
const date = new Date(timeStr);
parsedTime = isNaN(date.getTime())
? new Date().toISOString()
: date.toISOString();
} catch (e) {
parsedTime = new Date().toISOString();
}
recentLogins.push({
user,
ip,
time: parsedTime,
status: "success",
});
if (ip !== "local") {
ipSet.add(ip);
}
}
}
}
}
} catch (e) {}
try {
const failedOut = await execCommand(
client,
"grep 'Failed password' /var/log/auth.log 2>/dev/null | tail -10 || grep 'authentication failure' /var/log/secure 2>/dev/null | tail -10 || echo ''",
);
const failedLines = failedOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
for (const line of failedLines) {
let user = "unknown";
let ip = "unknown";
let timeStr = "";
const userMatch = line.match(/for (?:invalid user )?(\S+)/);
if (userMatch) {
user = userMatch[1];
}
const ipMatch = line.match(/from (\d+\.\d+\.\d+\.\d+)/);
if (ipMatch) {
ip = ipMatch[1];
}
const dateMatch = line.match(/^(\w+\s+\d+\s+\d+:\d+:\d+)/);
if (dateMatch) {
const currentYear = new Date().getFullYear();
timeStr = `${currentYear} ${dateMatch[1]}`;
}
if (user && ip) {
let parsedTime: string;
try {
const date = timeStr ? new Date(timeStr) : new Date();
parsedTime = isNaN(date.getTime())
? new Date().toISOString()
: date.toISOString();
} catch (e) {
parsedTime = new Date().toISOString();
}
failedLogins.push({
user,
ip,
time: parsedTime,
status: "failed",
});
if (ip !== "unknown") {
ipSet.add(ip);
}
}
}
} catch (e) {}
return {
recentLogins: recentLogins.slice(0, 10),
failedLogins: failedLogins.slice(0, 10),
totalLogins: recentLogins.length,
uniqueIPs: ipSet.size,
};
}


@@ -0,0 +1,41 @@
import type { Client } from "ssh2";
import { execCommand, toFixedNum, kibToGiB } from "./common-utils.js";
export async function collectMemoryMetrics(client: Client): Promise<{
percent: number | null;
usedGiB: number | null;
totalGiB: number | null;
}> {
let memPercent: number | null = null;
let usedGiB: number | null = null;
let totalGiB: number | null = null;
try {
const memInfo = await execCommand(client, "cat /proc/meminfo");
const lines = memInfo.stdout.split("\n");
const getVal = (key: string) => {
const line = lines.find((l) => l.startsWith(key));
if (!line) return null;
const m = line.match(/\d+/);
return m ? Number(m[0]) : null;
};
const totalKb = getVal("MemTotal:");
const availKb = getVal("MemAvailable:");
if (totalKb && availKb && totalKb > 0) {
const usedKb = totalKb - availKb;
memPercent = Math.max(0, Math.min(100, (usedKb / totalKb) * 100));
usedGiB = kibToGiB(usedKb);
totalGiB = kibToGiB(totalKb);
}
} catch (e) {
memPercent = null;
usedGiB = null;
totalGiB = null;
}
return {
percent: toFixedNum(memPercent, 0),
usedGiB: usedGiB ? toFixedNum(usedGiB, 2) : null,
totalGiB: totalGiB ? toFixedNum(totalGiB, 2) : null,
};
}


@@ -0,0 +1,74 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectNetworkMetrics(client: Client): Promise<{
interfaces: Array<{
name: string;
ip: string;
state: string;
rxBytes: string | null;
txBytes: string | null;
}>;
}> {
const interfaces: Array<{
name: string;
ip: string;
state: string;
rxBytes: string | null;
txBytes: string | null;
}> = [];
try {
const ifconfigOut = await execCommand(
client,
"ip -o addr show | awk '{print $2,$4}' | grep -v '^lo'",
);
const netStatOut = await execCommand(
client,
"ip -o link show | awk '{gsub(/:/, \"\", $2); print $2,$9}'",
);
const addrs = ifconfigOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const states = netStatOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
const ifMap = new Map<string, { ip: string; state: string }>();
for (const line of addrs) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const ip = parts[1].split("/")[0];
if (!ifMap.has(name)) ifMap.set(name, { ip, state: "UNKNOWN" });
}
}
for (const line of states) {
const parts = line.split(/\s+/);
if (parts.length >= 2) {
const name = parts[0];
const state = parts[1];
const existing = ifMap.get(name);
if (existing) {
existing.state = state;
}
}
}
for (const [name, data] of ifMap.entries()) {
interfaces.push({
name,
ip: data.ip,
state: data.state,
rxBytes: null,
txBytes: null,
});
}
} catch (e) {}
return { interfaces };
}


@@ -0,0 +1,64 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectProcessesMetrics(client: Client): Promise<{
total: number | null;
running: number | null;
top: Array<{
pid: string;
user: string;
cpu: string;
mem: string;
command: string;
}>;
}> {
let totalProcesses: number | null = null;
let runningProcesses: number | null = null;
const topProcesses: Array<{
pid: string;
user: string;
cpu: string;
mem: string;
command: string;
}> = [];
try {
const psOut = await execCommand(client, "ps aux --sort=-%cpu | head -n 11");
const psLines = psOut.stdout
.split("\n")
.map((l) => l.trim())
.filter(Boolean);
if (psLines.length > 1) {
for (let i = 1; i < Math.min(psLines.length, 11); i++) {
const parts = psLines[i].split(/\s+/);
if (parts.length >= 11) {
const cpuVal = Number(parts[2]);
const memVal = Number(parts[3]);
topProcesses.push({
pid: parts[1],
user: parts[0],
cpu: Number.isFinite(cpuVal) ? cpuVal.toString() : "0",
mem: Number.isFinite(memVal) ? memVal.toString() : "0",
command: parts.slice(10).join(" ").substring(0, 50),
});
}
}
}
const procCount = await execCommand(client, "ps aux | wc -l");
const runningCount = await execCommand(client, "ps aux | grep -c ' R '");
const totalCount = Number(procCount.stdout.trim()) - 1;
totalProcesses = Number.isFinite(totalCount) ? totalCount : null;
const runningCount2 = Number(runningCount.stdout.trim());
runningProcesses = Number.isFinite(runningCount2) ? runningCount2 : null;
} catch (e) {}
return {
total: totalProcesses,
running: runningProcesses,
top: topProcesses,
};
}


@@ -0,0 +1,34 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectSystemMetrics(client: Client): Promise<{
hostname: string | null;
kernel: string | null;
os: string | null;
}> {
let hostname: string | null = null;
let kernel: string | null = null;
let os: string | null = null;
try {
const hostnameOut = await execCommand(client, "hostname");
const kernelOut = await execCommand(client, "uname -r");
const osOut = await execCommand(
client,
"cat /etc/os-release | grep '^PRETTY_NAME=' | cut -d'\"' -f2",
);
hostname = hostnameOut.stdout.trim() || null;
kernel = kernelOut.stdout.trim() || null;
os = osOut.stdout.trim() || null;
} catch (e) {
// No error log
}
return {
hostname,
kernel,
os,
};
}


@@ -0,0 +1,30 @@
import type { Client } from "ssh2";
import { execCommand } from "./common-utils.js";
import { statsLogger } from "../../utils/logger.js";
export async function collectUptimeMetrics(client: Client): Promise<{
seconds: number | null;
formatted: string | null;
}> {
let uptimeSeconds: number | null = null;
let uptimeFormatted: string | null = null;
try {
const uptimeOut = await execCommand(client, "cat /proc/uptime");
const uptimeParts = uptimeOut.stdout.trim().split(/\s+/);
if (uptimeParts.length >= 1) {
uptimeSeconds = Number(uptimeParts[0]);
if (Number.isFinite(uptimeSeconds)) {
const days = Math.floor(uptimeSeconds / 86400);
const hours = Math.floor((uptimeSeconds % 86400) / 3600);
const minutes = Math.floor((uptimeSeconds % 3600) / 60);
uptimeFormatted = `${days}d ${hours}h ${minutes}m`;
}
}
} catch (e) {}
return {
seconds: uptimeSeconds,
formatted: uptimeFormatted,
};
}
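
A sketch of composing these collectors into a single best-effort snapshot for one connected ssh2 Client. The module file names in the imports are assumptions (the listing above does not show them); the collector names match the exports.

import { Client } from "ssh2";
import { collectCpuMetrics } from "./cpu.js"; // assumed module paths
import { collectMemoryMetrics } from "./memory.js";
import { collectDiskMetrics } from "./disk.js";
import { collectUptimeMetrics } from "./uptime.js";

export async function collectSnapshot(client: Client) {
  // Each collector swallows its own errors and returns nulls, so Promise.all
  // yields a partial snapshot instead of failing the whole poll.
  const [cpu, memory, disk, uptime] = await Promise.all([
    collectCpuMetrics(client),
    collectMemoryMetrics(client),
    collectDiskMetrics(client),
    collectUptimeMetrics(client),
  ]);
  return { cpu, memory, disk, uptime, collectedAt: new Date().toISOString() };
}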


@@ -21,7 +21,7 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
if (persistentConfig.parsed) {
Object.assign(process.env, persistentConfig.parsed);
}
} catch {}
} catch (error) {}
let version = "unknown";
@@ -73,7 +73,7 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
version = foundVersion;
break;
}
} catch (error) {
} catch {
continue;
}
}
@@ -102,6 +102,9 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
await import("./ssh/tunnel.js");
await import("./ssh/file-manager.js");
await import("./ssh/server-stats.js");
await import("./ssh/docker.js");
await import("./ssh/docker-console.js");
await import("./dashboard.js");
process.on("SIGINT", () => {
systemLogger.info(
@@ -126,7 +129,7 @@ import { systemLogger, versionLogger } from "./utils/logger.js";
process.exit(1);
});
process.on("unhandledRejection", (reason, promise) => {
process.on("unhandledRejection", (reason) => {
systemLogger.error("Unhandled promise rejection", reason, {
operation: "error_handling",
});

View File

@@ -4,6 +4,11 @@ import { SystemCrypto } from "./system-crypto.js";
import { DataCrypto } from "./data-crypto.js";
import { databaseLogger } from "./logger.js";
import type { Request, Response, NextFunction } from "express";
import { db } from "../database/db/index.js";
import { sessions } from "../database/db/schema.js";
import { eq, and, sql } from "drizzle-orm";
import { nanoid } from "nanoid";
import type { DeviceType } from "./user-agent-parser.js";
interface AuthenticationResult {
success: boolean;
@@ -18,16 +23,28 @@ interface AuthenticationResult {
interface JWTPayload {
userId: string;
sessionId?: string;
pendingTOTP?: boolean;
iat?: number;
exp?: number;
}
interface AuthenticatedRequest extends Request {
userId?: string;
pendingTOTP?: boolean;
dataKey?: Buffer;
}
interface RequestWithHeaders extends Request {
headers: Request["headers"] & {
"x-forwarded-proto"?: string;
};
}
class AuthManager {
private static instance: AuthManager;
private systemCrypto: SystemCrypto;
private userCrypto: UserCrypto;
private invalidatedTokens: Set<string> = new Set();
private constructor() {
this.systemCrypto = SystemCrypto.getInstance();
@@ -36,6 +53,21 @@ class AuthManager {
this.userCrypto.setSessionExpiredCallback((userId: string) => {
this.invalidateUserTokens(userId);
});
setInterval(
() => {
this.cleanupExpiredSessions().catch((error) => {
databaseLogger.error(
"Failed to run periodic session cleanup",
error,
{
operation: "session_cleanup_periodic",
},
);
});
},
5 * 60 * 1000,
);
}
static getInstance(): AuthManager {
@@ -53,24 +85,25 @@ class AuthManager {
await this.userCrypto.setupUserEncryption(userId, password);
}
async registerOIDCUser(userId: string): Promise<void> {
await this.userCrypto.setupOIDCUserEncryption(userId);
async registerOIDCUser(
userId: string,
sessionDurationMs: number,
): Promise<void> {
await this.userCrypto.setupOIDCUserEncryption(userId, sessionDurationMs);
}
async authenticateOIDCUser(userId: string): Promise<boolean> {
const authenticated = await this.userCrypto.authenticateOIDCUser(userId);
async authenticateOIDCUser(
userId: string,
deviceType?: DeviceType,
): Promise<boolean> {
const sessionDurationMs =
deviceType === "desktop" || deviceType === "mobile"
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
if (authenticated) {
await this.performLazyEncryptionMigration(userId);
}
return authenticated;
}
async authenticateUser(userId: string, password: string): Promise<boolean> {
const authenticated = await this.userCrypto.authenticateUser(
const authenticated = await this.userCrypto.authenticateOIDCUser(
userId,
password,
sessionDurationMs,
);
if (authenticated) {
@@ -80,6 +113,33 @@ class AuthManager {
return authenticated;
}
async authenticateUser(
userId: string,
password: string,
deviceType?: DeviceType,
): Promise<boolean> {
const sessionDurationMs =
deviceType === "desktop" || deviceType === "mobile"
? 30 * 24 * 60 * 60 * 1000
: 7 * 24 * 60 * 60 * 1000;
const authenticated = await this.userCrypto.authenticateUser(
userId,
password,
sessionDurationMs,
);
if (authenticated) {
await this.performLazyEncryptionMigration(userId);
}
return authenticated;
}
async convertToOIDCEncryption(userId: string): Promise<void> {
await this.userCrypto.convertToOIDCEncryption(userId);
}
private async performLazyEncryptionMigration(userId: string): Promise<void> {
try {
const userDataKey = this.getUserDataKey(userId);
@@ -94,9 +154,8 @@ class AuthManager {
return;
}
const { getSqlite, saveMemoryDatabaseToFile } = await import(
"../database/db/index.js"
);
const { getSqlite, saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
const sqlite = getSqlite();
@@ -108,7 +167,23 @@ class AuthManager {
if (migrationResult.migrated) {
await saveMemoryDatabaseToFile();
} else {
}
try {
const { CredentialSystemEncryptionMigration } =
await import("./credential-system-encryption-migration.js");
const credMigration = new CredentialSystemEncryptionMigration();
const credResult = await credMigration.migrateUserCredentials(userId);
if (credResult.migrated > 0) {
await saveMemoryDatabaseToFile();
}
} catch (error) {
databaseLogger.warn("Credential migration failed during login", {
operation: "login_credential_migration_failed",
userId,
error: error instanceof Error ? error.message : "Unknown error",
});
}
} catch (error) {
databaseLogger.error("Lazy encryption migration failed", error, {
@@ -121,50 +196,319 @@ class AuthManager {
async generateJWTToken(
userId: string,
options: { expiresIn?: string; pendingTOTP?: boolean } = {},
options: {
expiresIn?: string;
pendingTOTP?: boolean;
deviceType?: DeviceType;
deviceInfo?: string;
} = {},
): Promise<string> {
const jwtSecret = await this.systemCrypto.getJWTSecret();
let expiresIn = options.expiresIn;
if (!expiresIn && !options.pendingTOTP) {
if (options.deviceType === "desktop" || options.deviceType === "mobile") {
expiresIn = "30d";
} else {
expiresIn = "7d";
}
} else if (!expiresIn) {
expiresIn = "7d";
}
const payload: JWTPayload = { userId };
if (options.pendingTOTP) {
payload.pendingTOTP = true;
}
return jwt.sign(payload, jwtSecret, {
expiresIn: options.expiresIn || "24h",
} as jwt.SignOptions);
if (!options.pendingTOTP && options.deviceType && options.deviceInfo) {
const sessionId = nanoid();
payload.sessionId = sessionId;
const token = jwt.sign(payload, jwtSecret, {
expiresIn,
} as jwt.SignOptions);
const expirationMs = this.parseExpiresIn(expiresIn);
const now = new Date();
const expiresAt = new Date(now.getTime() + expirationMs).toISOString();
const createdAt = now.toISOString();
try {
await db.insert(sessions).values({
id: sessionId,
userId,
jwtToken: token,
deviceType: options.deviceType,
deviceInfo: options.deviceInfo,
createdAt,
expiresAt,
lastActiveAt: createdAt,
});
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
"Failed to save database after session creation",
saveError,
{
operation: "session_create_db_save_failed",
sessionId,
},
);
}
} catch (error) {
databaseLogger.error("Failed to create session", error, {
operation: "session_create_failed",
userId,
sessionId,
});
}
return token;
}
return jwt.sign(payload, jwtSecret, { expiresIn } as jwt.SignOptions);
}
private parseExpiresIn(expiresIn: string): number {
const match = expiresIn.match(/^(\d+)([smhd])$/);
if (!match) return 7 * 24 * 60 * 60 * 1000;
const value = parseInt(match[1]);
const unit = match[2];
switch (unit) {
case "s":
return value * 1000;
case "m":
return value * 60 * 1000;
case "h":
return value * 60 * 60 * 1000;
case "d":
return value * 24 * 60 * 60 * 1000;
default:
return 7 * 24 * 60 * 60 * 1000;
}
}
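The session bookkeeping above relies on a sessions table whose columns mirror the fields passed to db.insert(sessions).values(...). The schema itself is not part of this diff; a plausible drizzle-orm sketch with SQLite text columns (names and types are assumptions):

import { sqliteTable, text } from "drizzle-orm/sqlite-core";

// hypothetical shape of the sessions table referenced above
export const sessions = sqliteTable("sessions", {
  id: text("id").primaryKey(), // nanoid generated per login
  userId: text("user_id").notNull(),
  jwtToken: text("jwt_token").notNull(),
  deviceType: text("device_type").notNull(),
  deviceInfo: text("device_info").notNull(),
  createdAt: text("created_at").notNull(),
  expiresAt: text("expires_at").notNull(), // ISO string, compared against datetime('now')
  lastActiveAt: text("last_active_at").notNull(),
});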
async verifyJWTToken(token: string): Promise<JWTPayload | null> {
try {
if (this.invalidatedTokens.has(token)) {
return null;
}
const jwtSecret = await this.systemCrypto.getJWTSecret();
const payload = jwt.verify(token, jwtSecret) as JWTPayload;
if (payload.sessionId) {
try {
const sessionRecords = await db
.select()
.from(sessions)
.where(eq(sessions.id, payload.sessionId))
.limit(1);
if (sessionRecords.length === 0) {
databaseLogger.warn("Session not found during JWT verification", {
operation: "jwt_verify_session_not_found",
sessionId: payload.sessionId,
userId: payload.userId,
});
return null;
}
} catch (dbError) {
databaseLogger.error(
"Failed to check session in database during JWT verification",
dbError,
{
operation: "jwt_verify_session_check_failed",
sessionId: payload.sessionId,
},
);
return null;
}
}
return payload;
} catch (error) {
databaseLogger.warn("JWT verification failed", {
operation: "jwt_verify_failed",
error: error instanceof Error ? error.message : "Unknown error",
errorName: error instanceof Error ? error.name : "Unknown",
});
return null;
}
}
invalidateJWTToken(token: string): void {
this.invalidatedTokens.add(token);
invalidateJWTToken(token: string): void {}
invalidateUserTokens(userId: string): void {}
async revokeSession(sessionId: string): Promise<boolean> {
try {
await db.delete(sessions).where(eq(sessions.id, sessionId));
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
"Failed to save database after session revocation",
saveError,
{
operation: "session_revoke_db_save_failed",
sessionId,
},
);
}
return true;
} catch (error) {
databaseLogger.error("Failed to delete session", error, {
operation: "session_delete_failed",
sessionId,
});
return false;
}
}
invalidateUserTokens(userId: string): void {
databaseLogger.info("User tokens invalidated due to data lock", {
operation: "user_tokens_invalidate",
userId,
});
async revokeAllUserSessions(
userId: string,
exceptSessionId?: string,
): Promise<number> {
try {
const userSessions = await db
.select()
.from(sessions)
.where(eq(sessions.userId, userId));
const deletedCount = userSessions.filter(
(s) => !exceptSessionId || s.id !== exceptSessionId,
).length;
if (exceptSessionId) {
await db
.delete(sessions)
.where(
and(
eq(sessions.userId, userId),
sql`${sessions.id} != ${exceptSessionId}`,
),
);
} else {
await db.delete(sessions).where(eq(sessions.userId, userId));
}
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
"Failed to save database after revoking all user sessions",
saveError,
{
operation: "user_sessions_revoke_db_save_failed",
userId,
},
);
}
return deletedCount;
} catch (error) {
databaseLogger.error("Failed to delete user sessions", error, {
operation: "user_sessions_delete_failed",
userId,
});
return 0;
}
}
getSecureCookieOptions(req: any, maxAge: number = 24 * 60 * 60 * 1000) {
async cleanupExpiredSessions(): Promise<number> {
try {
const expiredSessions = await db
.select()
.from(sessions)
.where(sql`${sessions.expiresAt} < datetime('now')`);
const expiredCount = expiredSessions.length;
if (expiredCount === 0) {
return 0;
}
await db
.delete(sessions)
.where(sql`${sessions.expiresAt} < datetime('now')`);
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
"Failed to save database after cleaning up expired sessions",
saveError,
{
operation: "sessions_cleanup_db_save_failed",
},
);
}
const affectedUsers = new Set(expiredSessions.map((s) => s.userId));
for (const userId of affectedUsers) {
const remainingSessions = await db
.select()
.from(sessions)
.where(eq(sessions.userId, userId));
if (remainingSessions.length === 0) {
this.userCrypto.logoutUser(userId);
}
}
return expiredCount;
} catch (error) {
databaseLogger.error("Failed to cleanup expired sessions", error, {
operation: "sessions_cleanup_failed",
});
return 0;
}
}
async getAllSessions(): Promise<any[]> {
try {
const allSessions = await db.select().from(sessions);
return allSessions;
} catch (error) {
databaseLogger.error("Failed to get all sessions", error, {
operation: "sessions_get_all_failed",
});
return [];
}
}
async getUserSessions(userId: string): Promise<any[]> {
try {
const userSessions = await db
.select()
.from(sessions)
.where(eq(sessions.userId, userId));
return userSessions;
} catch (error) {
databaseLogger.error("Failed to get user sessions", error, {
operation: "sessions_get_user_failed",
userId,
});
return [];
}
}
getSecureCookieOptions(
req: RequestWithHeaders,
maxAge: number = 7 * 24 * 60 * 60 * 1000,
) {
return {
httpOnly: false,
secure: req.secure || req.headers["x-forwarded-proto"] === "https",
@@ -176,10 +520,11 @@ class AuthManager {
createAuthMiddleware() {
return async (req: Request, res: Response, next: NextFunction) => {
let token = req.cookies?.jwt;
const authReq = req as AuthenticatedRequest;
let token = authReq.cookies?.jwt;
if (!token) {
const authHeader = req.headers["authorization"];
const authHeader = authReq.headers["authorization"];
if (authHeader?.startsWith("Bearer ")) {
token = authHeader.split(" ")[1];
}
@@ -195,40 +540,141 @@ class AuthManager {
return res.status(401).json({ error: "Invalid token" });
}
(req as any).userId = payload.userId;
(req as any).pendingTOTP = payload.pendingTOTP;
if (payload.sessionId) {
try {
const sessionRecords = await db
.select()
.from(sessions)
.where(eq(sessions.id, payload.sessionId))
.limit(1);
if (sessionRecords.length === 0) {
databaseLogger.warn("Session not found in middleware", {
operation: "middleware_session_not_found",
sessionId: payload.sessionId,
userId: payload.userId,
});
return res.status(401).json({
error: "Session not found",
code: "SESSION_NOT_FOUND",
});
}
const session = sessionRecords[0];
const sessionExpiryTime = new Date(session.expiresAt).getTime();
const currentTime = Date.now();
const isExpired = sessionExpiryTime < currentTime;
if (isExpired) {
databaseLogger.warn("Session has expired", {
operation: "session_expired",
sessionId: payload.sessionId,
expiresAt: session.expiresAt,
expiryTime: sessionExpiryTime,
currentTime: currentTime,
difference: currentTime - sessionExpiryTime,
});
db.delete(sessions)
.where(eq(sessions.id, payload.sessionId))
.then(async () => {
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
const remainingSessions = await db
.select()
.from(sessions)
.where(eq(sessions.userId, payload.userId));
if (remainingSessions.length === 0) {
this.userCrypto.logoutUser(payload.userId);
}
} catch (cleanupError) {
databaseLogger.error(
"Failed to cleanup after expired session",
cleanupError,
{
operation: "expired_session_cleanup_failed",
sessionId: payload.sessionId,
},
);
}
})
.catch((error) => {
databaseLogger.error(
"Failed to delete expired session",
error,
{
operation: "expired_session_delete_failed",
sessionId: payload.sessionId,
},
);
});
return res.status(401).json({
error: "Session has expired",
code: "SESSION_EXPIRED",
});
}
db.update(sessions)
.set({ lastActiveAt: new Date().toISOString() })
.where(eq(sessions.id, payload.sessionId))
.then(() => {})
.catch((error) => {
databaseLogger.warn("Failed to update session lastActiveAt", {
operation: "session_update_last_active",
sessionId: payload.sessionId,
error: error instanceof Error ? error.message : "Unknown error",
});
});
} catch (error) {
databaseLogger.error("Session check failed in middleware", error, {
operation: "middleware_session_check_failed",
sessionId: payload.sessionId,
});
return res.status(500).json({ error: "Session check failed" });
}
}
authReq.userId = payload.userId;
authReq.pendingTOTP = payload.pendingTOTP;
next();
};
}
createDataAccessMiddleware() {
return async (req: Request, res: Response, next: NextFunction) => {
const userId = (req as any).userId;
const authReq = req as AuthenticatedRequest;
const userId = authReq.userId;
if (!userId) {
return res.status(401).json({ error: "Authentication required" });
}
const dataKey = this.userCrypto.getUserDataKey(userId);
if (!dataKey) {
return res.status(401).json({
error: "Session expired - please log in again",
code: "SESSION_EXPIRED",
});
}
(req as any).dataKey = dataKey;
authReq.dataKey = dataKey || undefined;
next();
};
}
createAdminMiddleware() {
return async (req: Request, res: Response, next: NextFunction) => {
const authHeader = req.headers["authorization"];
if (!authHeader?.startsWith("Bearer ")) {
return res.status(401).json({ error: "Missing Authorization header" });
let token = req.cookies?.jwt;
if (!token) {
const authHeader = req.headers["authorization"];
if (authHeader?.startsWith("Bearer ")) {
token = authHeader.split(" ")[1];
}
}
if (!token) {
return res.status(401).json({ error: "Missing authentication token" });
}
const token = authHeader.split(" ")[1];
const payload = await this.verifyJWTToken(token);
if (!payload) {
@@ -257,8 +703,9 @@ class AuthManager {
return res.status(403).json({ error: "Admin access required" });
}
(req as any).userId = payload.userId;
(req as any).pendingTOTP = payload.pendingTOTP;
const authReq = req as AuthenticatedRequest;
authReq.userId = payload.userId;
authReq.pendingTOTP = payload.pendingTOTP;
next();
} catch (error) {
databaseLogger.error("Failed to verify admin privileges", error, {
@@ -272,8 +719,46 @@ class AuthManager {
};
}
logoutUser(userId: string): void {
this.userCrypto.logoutUser(userId);
async logoutUser(userId: string, sessionId?: string): Promise<void> {
if (sessionId) {
try {
await db.delete(sessions).where(eq(sessions.id, sessionId));
try {
const { saveMemoryDatabaseToFile } =
await import("../database/db/index.js");
await saveMemoryDatabaseToFile();
} catch (saveError) {
databaseLogger.error(
"Failed to save database after logout",
saveError,
{
operation: "logout_db_save_failed",
userId,
sessionId,
},
);
}
const remainingSessions = await db
.select()
.from(sessions)
.where(eq(sessions.userId, userId));
if (remainingSessions.length === 0) {
this.userCrypto.logoutUser(userId);
} else {
}
} catch (error) {
databaseLogger.error("Failed to delete session on logout", error, {
operation: "session_delete_logout_failed",
userId,
sessionId,
});
}
} else {
this.userCrypto.logoutUser(userId);
}
}
getUserDataKey(userId: string): Buffer | null {

View File

@@ -1,7 +1,6 @@
import { execSync } from "child_process";
import { promises as fs } from "fs";
import path from "path";
import crypto from "crypto";
import { systemLogger } from "./logger.js";
export class AutoSSLSetup {
@@ -102,7 +101,7 @@ export class AutoSSLSetup {
try {
try {
execSync("openssl version", { stdio: "pipe" });
} catch (error) {
} catch {
throw new Error(
"OpenSSL is not installed or not available in PATH. Please install OpenSSL to enable SSL certificate generation.",
);
@@ -234,7 +233,7 @@ IP.3 = 0.0.0.0
let envContent = "";
try {
envContent = await fs.readFile(this.ENV_FILE, "utf8");
} catch {}
} catch (error) {}
let updatedContent = envContent;
let hasChanges = false;

View File

@@ -0,0 +1,131 @@
import { db } from "../database/db/index.js";
import { sshCredentials } from "../database/db/schema.js";
import { eq, and, or, isNull } from "drizzle-orm";
import { DataCrypto } from "./data-crypto.js";
import { SystemCrypto } from "./system-crypto.js";
import { FieldCrypto } from "./field-crypto.js";
import { databaseLogger } from "./logger.js";
export class CredentialSystemEncryptionMigration {
async migrateUserCredentials(userId: string): Promise<{
migrated: number;
failed: number;
skipped: number;
}> {
try {
const userDEK = DataCrypto.getUserDataKey(userId);
if (!userDEK) {
throw new Error("User must be logged in to migrate credentials");
}
const systemCrypto = SystemCrypto.getInstance();
const CSKEK = await systemCrypto.getCredentialSharingKey();
const credentials = await db
.select()
.from(sshCredentials)
.where(
and(
eq(sshCredentials.userId, userId),
or(
isNull(sshCredentials.systemPassword),
isNull(sshCredentials.systemKey),
isNull(sshCredentials.systemKeyPassword),
),
),
);
let migrated = 0;
let failed = 0;
const skipped = 0;
for (const cred of credentials) {
try {
const plainPassword = cred.password
? FieldCrypto.decryptField(
cred.password,
userDEK,
cred.id.toString(),
"password",
)
: null;
const plainKey = cred.key
? FieldCrypto.decryptField(
cred.key,
userDEK,
cred.id.toString(),
"key",
)
: null;
const plainKeyPassword = cred.key_password
? FieldCrypto.decryptField(
cred.key_password,
userDEK,
cred.id.toString(),
"key_password",
)
: null;
const systemPassword = plainPassword
? FieldCrypto.encryptField(
plainPassword,
CSKEK,
cred.id.toString(),
"password",
)
: null;
const systemKey = plainKey
? FieldCrypto.encryptField(
plainKey,
CSKEK,
cred.id.toString(),
"key",
)
: null;
const systemKeyPassword = plainKeyPassword
? FieldCrypto.encryptField(
plainKeyPassword,
CSKEK,
cred.id.toString(),
"key_password",
)
: null;
await db
.update(sshCredentials)
.set({
systemPassword,
systemKey,
systemKeyPassword,
updatedAt: new Date().toISOString(),
})
.where(eq(sshCredentials.id, cred.id));
migrated++;
} catch (error) {
databaseLogger.error("Failed to migrate credential", error, {
credentialId: cred.id,
userId,
});
failed++;
}
}
return { migrated, failed, skipped };
} catch (error) {
databaseLogger.error(
"Credential system encryption migration failed",
error,
{
operation: "credential_migration_failed",
userId,
error: error instanceof Error ? error.message : "Unknown error",
},
);
throw error;
}
}
}

View File

@@ -3,6 +3,19 @@ import { LazyFieldEncryption } from "./lazy-field-encryption.js";
import { UserCrypto } from "./user-crypto.js";
import { databaseLogger } from "./logger.js";
interface DatabaseInstance {
prepare: (sql: string) => {
all: (param?: unknown) => DatabaseRecord[];
get: (param?: unknown) => DatabaseRecord;
run: (...params: unknown[]) => unknown;
};
}
interface DatabaseRecord {
id: number | string;
[key: string]: unknown;
}
class DataCrypto {
private static userCrypto: UserCrypto;
@@ -10,13 +23,13 @@ class DataCrypto {
this.userCrypto = UserCrypto.getInstance();
}
static encryptRecord(
static encryptRecord<T extends Record<string, unknown>>(
tableName: string,
record: any,
record: T,
userId: string,
userDataKey: Buffer,
): any {
const encryptedRecord = { ...record };
): T {
const encryptedRecord: Record<string, unknown> = { ...record };
const recordId = record.id || "temp-" + Date.now();
for (const [fieldName, value] of Object.entries(record)) {
@@ -24,24 +37,24 @@ class DataCrypto {
encryptedRecord[fieldName] = FieldCrypto.encryptField(
value as string,
userDataKey,
recordId,
recordId as string,
fieldName,
);
}
}
return encryptedRecord;
return encryptedRecord as T;
}
static decryptRecord(
static decryptRecord<T extends Record<string, unknown>>(
tableName: string,
record: any,
record: T,
userId: string,
userDataKey: Buffer,
): any {
): T {
if (!record) return record;
const decryptedRecord = { ...record };
const decryptedRecord: Record<string, unknown> = { ...record };
const recordId = record.id;
for (const [fieldName, value] of Object.entries(record)) {
@@ -49,21 +62,21 @@ class DataCrypto {
decryptedRecord[fieldName] = LazyFieldEncryption.safeGetFieldValue(
value as string,
userDataKey,
recordId,
recordId as string,
fieldName,
);
}
}
return decryptedRecord;
return decryptedRecord as T;
}
static decryptRecords(
static decryptRecords<T extends Record<string, unknown>>(
tableName: string,
records: any[],
records: T[],
userId: string,
userDataKey: Buffer,
): any[] {
): T[] {
if (!Array.isArray(records)) return records;
return records.map((record) =>
this.decryptRecord(tableName, record, userId, userDataKey),
@@ -73,7 +86,7 @@ class DataCrypto {
static async migrateUserSensitiveFields(
userId: string,
userDataKey: Buffer,
db: any,
db: DatabaseInstance,
): Promise<{
migrated: boolean;
migratedTables: string[];
@@ -84,7 +97,7 @@ class DataCrypto {
let migratedFieldsCount = 0;
try {
const { needsMigration, plaintextFields } =
const { needsMigration } =
await LazyFieldEncryption.checkUserNeedsMigration(
userId,
userDataKey,
@@ -97,7 +110,7 @@ class DataCrypto {
const sshDataRecords = db
.prepare("SELECT * FROM ssh_data WHERE user_id = ?")
.all(userId);
.all(userId) as DatabaseRecord[];
for (const record of sshDataRecords) {
const sensitiveFields =
LazyFieldEncryption.getSensitiveFieldsForTable("ssh_data");
@@ -112,13 +125,17 @@ class DataCrypto {
if (needsUpdate) {
const updateQuery = `
UPDATE ssh_data
SET password = ?, key = ?, key_password = ?, updated_at = CURRENT_TIMESTAMP
SET password = ?, key = ?, key_password = ?, key_type = ?, autostart_password = ?, autostart_key = ?, autostart_key_password = ?, updated_at = CURRENT_TIMESTAMP
WHERE id = ?
`;
db.prepare(updateQuery).run(
updatedRecord.password || null,
updatedRecord.key || null,
updatedRecord.key_password || null,
updatedRecord.key_password || updatedRecord.keyPassword || null,
updatedRecord.keyType || null,
updatedRecord.autostartPassword || null,
updatedRecord.autostartKey || null,
updatedRecord.autostartKeyPassword || null,
record.id,
);
@@ -132,7 +149,7 @@ class DataCrypto {
const sshCredentialsRecords = db
.prepare("SELECT * FROM ssh_credentials WHERE user_id = ?")
.all(userId);
.all(userId) as DatabaseRecord[];
for (const record of sshCredentialsRecords) {
const sensitiveFields =
LazyFieldEncryption.getSensitiveFieldsForTable("ssh_credentials");
@@ -147,15 +164,16 @@ class DataCrypto {
if (needsUpdate) {
const updateQuery = `
UPDATE ssh_credentials
SET password = ?, key = ?, key_password = ?, private_key = ?, public_key = ?, updated_at = CURRENT_TIMESTAMP
SET password = ?, key = ?, key_password = ?, private_key = ?, public_key = ?, key_type = ?, updated_at = CURRENT_TIMESTAMP
WHERE id = ?
`;
db.prepare(updateQuery).run(
updatedRecord.password || null,
updatedRecord.key || null,
updatedRecord.key_password || null,
updatedRecord.private_key || null,
updatedRecord.public_key || null,
updatedRecord.key_password || updatedRecord.keyPassword || null,
updatedRecord.private_key || updatedRecord.privateKey || null,
updatedRecord.public_key || updatedRecord.publicKey || null,
updatedRecord.keyType || null,
record.id,
);
@@ -169,7 +187,7 @@ class DataCrypto {
const userRecord = db
.prepare("SELECT * FROM users WHERE id = ?")
.get(userId);
.get(userId) as DatabaseRecord | undefined;
if (userRecord) {
const sensitiveFields =
LazyFieldEncryption.getSensitiveFieldsForTable("users");
@@ -184,12 +202,18 @@ class DataCrypto {
if (needsUpdate) {
const updateQuery = `
UPDATE users
SET totp_secret = ?, totp_backup_codes = ?
SET totp_secret = ?, totp_backup_codes = ?, client_secret = ?, oidc_identifier = ?
WHERE id = ?
`;
db.prepare(updateQuery).run(
updatedRecord.totp_secret || null,
updatedRecord.totp_backup_codes || null,
updatedRecord.totp_secret || updatedRecord.totpSecret || null,
updatedRecord.totp_backup_codes ||
updatedRecord.totpBackupCodes ||
null,
updatedRecord.client_secret || updatedRecord.clientSecret || null,
updatedRecord.oidc_identifier ||
updatedRecord.oidcIdentifier ||
null,
userId,
);
@@ -220,7 +244,7 @@ class DataCrypto {
static async reencryptUserDataAfterPasswordReset(
userId: string,
newUserDataKey: Buffer,
db: any,
db: DatabaseInstance,
): Promise<{
success: boolean;
reencryptedTables: string[];
@@ -236,24 +260,44 @@ class DataCrypto {
try {
const tablesToReencrypt = [
{ table: "ssh_data", fields: ["password", "key", "key_password"] },
{
table: "ssh_data",
fields: [
"password",
"key",
"key_password",
"keyPassword",
"keyType",
"autostartPassword",
"autostartKey",
"autostartKeyPassword",
],
},
{
table: "ssh_credentials",
fields: [
"password",
"private_key",
"privateKey",
"key_password",
"keyPassword",
"key",
"public_key",
"publicKey",
"keyType",
],
},
{
table: "users",
fields: [
"client_secret",
"clientSecret",
"totp_secret",
"totpSecret",
"totp_backup_codes",
"totpBackupCodes",
"oidc_identifier",
"oidcIdentifier",
],
},
];
@@ -262,17 +306,21 @@ class DataCrypto {
try {
const records = db
.prepare(`SELECT * FROM ${table} WHERE user_id = ?`)
.all(userId);
.all(userId) as DatabaseRecord[];
for (const record of records) {
const recordId = record.id.toString();
const updatedRecord: DatabaseRecord = { ...record };
let needsUpdate = false;
const updatedRecord = { ...record };
for (const fieldName of fields) {
const fieldValue = record[fieldName];
if (fieldValue && fieldValue.trim() !== "") {
if (
fieldValue &&
typeof fieldValue === "string" &&
fieldValue.trim() !== ""
) {
try {
const reencryptedValue = FieldCrypto.encryptField(
fieldValue,
@@ -345,18 +393,6 @@ class DataCrypto {
result.success = result.errors.length === 0;
databaseLogger.info(
"User data re-encryption completed after password reset",
{
operation: "password_reset_reencrypt_completed",
userId,
success: result.success,
reencryptedTables: result.reencryptedTables,
reencryptedFieldsCount: result.reencryptedFieldsCount,
errorsCount: result.errors.length,
},
);
return result;
} catch (error) {
databaseLogger.error(
@@ -384,29 +420,29 @@ class DataCrypto {
return userDataKey;
}
static encryptRecordForUser(
static encryptRecordForUser<T extends Record<string, unknown>>(
tableName: string,
record: any,
record: T,
userId: string,
): any {
): T {
const userDataKey = this.validateUserAccess(userId);
return this.encryptRecord(tableName, record, userId, userDataKey);
}
static decryptRecordForUser(
static decryptRecordForUser<T extends Record<string, unknown>>(
tableName: string,
record: any,
record: T,
userId: string,
): any {
): T {
const userDataKey = this.validateUserAccess(userId);
return this.decryptRecord(tableName, record, userId, userDataKey);
}
static decryptRecordsForUser(
static decryptRecordsForUser<T extends Record<string, unknown>>(
tableName: string,
records: any[],
records: T[],
userId: string,
): any[] {
): T[] {
const userDataKey = this.validateUserAccess(userId);
return this.decryptRecords(tableName, records, userId, userDataKey);
}
@@ -435,10 +471,56 @@ class DataCrypto {
);
return decrypted === testData;
} catch (error) {
} catch {
return false;
}
}
/**
* Encrypt sensitive credential fields with system key for offline sharing
* Returns an object with systemPassword, systemKey, systemKeyPassword fields
*/
static async encryptRecordWithSystemKey<T extends Record<string, unknown>>(
tableName: string,
record: T,
systemKey: Buffer,
): Promise<Partial<T>> {
const systemEncrypted: Record<string, unknown> = {};
const recordId = record.id || "temp-" + Date.now();
if (tableName !== "ssh_credentials") {
return systemEncrypted as Partial<T>;
}
if (record.password && typeof record.password === "string") {
systemEncrypted.systemPassword = FieldCrypto.encryptField(
record.password as string,
systemKey,
recordId as string,
"password",
);
}
if (record.key && typeof record.key === "string") {
systemEncrypted.systemKey = FieldCrypto.encryptField(
record.key as string,
systemKey,
recordId as string,
"key",
);
}
if (record.key_password && typeof record.key_password === "string") {
systemEncrypted.systemKeyPassword = FieldCrypto.encryptField(
record.key_password as string,
systemKey,
recordId as string,
"key_password",
);
}
return systemEncrypted as Partial<T>;
}
}
export { DataCrypto };

View File

@@ -12,6 +12,7 @@ interface EncryptedFileMetadata {
algorithm: string;
keySource?: string;
salt?: string;
dataSize?: number;
}
class DatabaseFileEncryption {
@@ -25,12 +26,17 @@ class DatabaseFileEncryption {
buffer: Buffer,
targetPath: string,
): Promise<string> {
const tmpPath = `${targetPath}.tmp-${Date.now()}-${process.pid}`;
const metadataPath = `${targetPath}${this.METADATA_FILE_SUFFIX}`;
try {
const key = await this.systemCrypto.getDatabaseKey();
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv(this.ALGORITHM, key, iv) as any;
const cipher = crypto.createCipheriv(
this.ALGORITHM,
key,
iv,
) as crypto.CipherGCM;
const encrypted = Buffer.concat([cipher.update(buffer), cipher.final()]);
const tag = cipher.getAuthTag();
@@ -41,14 +47,55 @@ class DatabaseFileEncryption {
fingerprint: "termix-v2-systemcrypto",
algorithm: this.ALGORITHM,
keySource: "SystemCrypto",
dataSize: encrypted.length,
};
const metadataPath = `${targetPath}${this.METADATA_FILE_SUFFIX}`;
fs.writeFileSync(targetPath, encrypted);
fs.writeFileSync(metadataPath, JSON.stringify(metadata, null, 2));
const metadataJson = JSON.stringify(metadata, null, 2);
const metadataBuffer = Buffer.from(metadataJson, "utf8");
const metadataLengthBuffer = Buffer.alloc(4);
metadataLengthBuffer.writeUInt32BE(metadataBuffer.length, 0);
const finalBuffer = Buffer.concat([
metadataLengthBuffer,
metadataBuffer,
encrypted,
]);
fs.writeFileSync(tmpPath, finalBuffer);
fs.renameSync(tmpPath, targetPath);
try {
if (fs.existsSync(metadataPath)) {
fs.unlinkSync(metadataPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup old metadata file", {
operation: "old_meta_cleanup_failed",
path: metadataPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
return targetPath;
} catch (error) {
try {
if (fs.existsSync(tmpPath)) {
fs.unlinkSync(tmpPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup temporary files", {
operation: "temp_file_cleanup_failed",
tmpPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
databaseLogger.error("Failed to encrypt database buffer", error, {
operation: "database_buffer_encryption_failed",
targetPath,
@@ -70,6 +117,8 @@ class DatabaseFileEncryption {
const encryptedPath =
targetPath || `${sourcePath}${this.ENCRYPTED_FILE_SUFFIX}`;
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
const tmpPath = `${encryptedPath}.tmp-${Date.now()}-${process.pid}`;
const tmpMetadataPath = `${tmpPath}${this.METADATA_FILE_SUFFIX}`;
try {
const sourceData = fs.readFileSync(sourcePath);
@@ -78,13 +127,23 @@ class DatabaseFileEncryption {
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv(this.ALGORITHM, key, iv) as any;
const cipher = crypto.createCipheriv(
this.ALGORITHM,
key,
iv,
) as crypto.CipherGCM;
const encrypted = Buffer.concat([
cipher.update(sourceData),
cipher.final(),
]);
const tag = cipher.getAuthTag();
const keyFingerprint = crypto
.createHash("sha256")
.update(key)
.digest("hex")
.substring(0, 16);
const metadata: EncryptedFileMetadata = {
iv: iv.toString("hex"),
tag: tag.toString("hex"),
@@ -92,10 +151,14 @@ class DatabaseFileEncryption {
fingerprint: "termix-v2-systemcrypto",
algorithm: this.ALGORITHM,
keySource: "SystemCrypto",
dataSize: encrypted.length,
};
fs.writeFileSync(encryptedPath, encrypted);
fs.writeFileSync(metadataPath, JSON.stringify(metadata, null, 2));
fs.writeFileSync(tmpPath, encrypted);
fs.writeFileSync(tmpMetadataPath, JSON.stringify(metadata, null, 2));
fs.renameSync(tmpPath, encryptedPath);
fs.renameSync(tmpMetadataPath, metadataPath);
databaseLogger.info("Database file encrypted successfully", {
operation: "database_file_encryption",
@@ -103,11 +166,30 @@ class DatabaseFileEncryption {
encryptedPath,
fileSize: sourceData.length,
encryptedSize: encrypted.length,
keyFingerprint,
fingerprintPrefix: metadata.fingerprint,
});
return encryptedPath;
} catch (error) {
try {
if (fs.existsSync(tmpPath)) {
fs.unlinkSync(tmpPath);
}
if (fs.existsSync(tmpMetadataPath)) {
fs.unlinkSync(tmpMetadataPath);
}
} catch (cleanupError) {
databaseLogger.warn("Failed to cleanup temporary files", {
operation: "temp_file_cleanup_failed",
tmpPath,
error:
cleanupError instanceof Error
? cleanupError.message
: "Unknown error",
});
}
databaseLogger.error("Failed to encrypt database file", error, {
operation: "database_file_encryption_failed",
sourcePath,
@@ -126,16 +208,69 @@ class DatabaseFileEncryption {
);
}
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
if (!fs.existsSync(metadataPath)) {
throw new Error(`Metadata file does not exist: ${metadataPath}`);
let metadata: EncryptedFileMetadata;
let encryptedData: Buffer;
const fileBuffer = fs.readFileSync(encryptedPath);
try {
const metadataLength = fileBuffer.readUInt32BE(0);
const metadataEnd = 4 + metadataLength;
if (
metadataLength <= 0 ||
metadataEnd > fileBuffer.length ||
metadataEnd <= 4
) {
throw new Error("Invalid metadata length in single-file format");
}
const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
metadata = JSON.parse(metadataJson);
encryptedData = fileBuffer.slice(metadataEnd);
if (!metadata.iv || !metadata.tag || !metadata.version) {
throw new Error("Invalid metadata structure in single-file format");
}
} catch (singleFileError) {
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
if (!fs.existsSync(metadataPath)) {
throw new Error(
`Could not read database: Not a valid single-file format and metadata file is missing: ${metadataPath}. Error: ${singleFileError.message}`,
);
}
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
metadata = JSON.parse(metadataContent);
encryptedData = fileBuffer;
} catch (twoFileError) {
throw new Error(
`Failed to read database using both single-file and two-file formats. Error: ${twoFileError.message}`,
);
}
}
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
const encryptedData = fs.readFileSync(encryptedPath);
if (
metadata.dataSize !== undefined &&
encryptedData.length !== metadata.dataSize
) {
databaseLogger.error(
"Encrypted file size mismatch - possible corrupted write or mismatched metadata",
null,
{
operation: "database_file_size_mismatch",
encryptedPath,
actualSize: encryptedData.length,
expectedSize: metadata.dataSize,
},
);
throw new Error(
`Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
`This indicates corrupted files or interrupted write operation.`,
);
}
let key: Buffer;
if (metadata.version === "v2") {
@@ -163,7 +298,7 @@ class DatabaseFileEncryption {
metadata.algorithm,
key,
Buffer.from(metadata.iv, "hex"),
) as any;
) as crypto.DecipherGCM;
decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));
const decryptedBuffer = Buffer.concat([
@@ -173,13 +308,63 @@ class DatabaseFileEncryption {
return decryptedBuffer;
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const isAuthError =
errorMessage.includes("Unsupported state") ||
errorMessage.includes("authenticate data") ||
errorMessage.includes("auth");
if (isAuthError) {
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
let envFileExists = false;
let envFileReadable = false;
try {
envFileExists = fs.existsSync(envPath);
if (envFileExists) {
fs.accessSync(envPath, fs.constants.R_OK);
envFileReadable = true;
}
} catch (error) {}
databaseLogger.error(
"Database decryption authentication failed - possible causes: wrong DATABASE_KEY, corrupted files, or interrupted write",
error,
{
operation: "database_buffer_decryption_auth_failed",
encryptedPath,
dataDir,
envPath,
envFileExists,
envFileReadable,
hasEnvKey: !!process.env.DATABASE_KEY,
envKeyLength: process.env.DATABASE_KEY?.length || 0,
suggestion:
"Check if DATABASE_KEY in .env matches the key used for encryption",
},
);
throw new Error(
`Database decryption authentication failed. This usually means:\n` +
`1. DATABASE_KEY has changed or is missing from ${dataDir}/.env\n` +
`2. Encrypted file was corrupted during write (system crash/restart)\n` +
`3. Metadata file does not match encrypted data\n` +
`\nDebug info:\n` +
`- DATA_DIR: ${dataDir}\n` +
`- .env file exists: ${envFileExists}\n` +
`- .env file readable: ${envFileReadable}\n` +
`- DATABASE_KEY in environment: ${!!process.env.DATABASE_KEY}\n` +
`Original error: ${errorMessage}`,
);
}
databaseLogger.error("Failed to decrypt database to buffer", error, {
operation: "database_buffer_decryption_failed",
encryptedPath,
errorMessage,
});
throw new Error(
`Database buffer decryption failed: ${error instanceof Error ? error.message : "Unknown error"}`,
);
throw new Error(`Database buffer decryption failed: ${errorMessage}`);
}
}
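The single-file layout read above is a 4-byte big-endian metadata length, the metadata JSON, and the GCM ciphertext concatenated into one file. A standalone sketch of packing and unpacking that container (aes-256-gcm, the 16-byte IV, and the helper names are assumptions consistent with the iv/tag/dataSize fields used here):

import crypto from "crypto";

// container layout: [u32 BE metadata length][metadata JSON][ciphertext]
function packEncrypted(plain: Buffer, key: Buffer): Buffer {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plain), cipher.final()]);
  const metadata = {
    iv: iv.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"),
    version: "v2",
    algorithm: "aes-256-gcm",
    dataSize: ciphertext.length,
  };
  const metadataBuffer = Buffer.from(JSON.stringify(metadata), "utf8");
  const lengthPrefix = Buffer.alloc(4);
  lengthPrefix.writeUInt32BE(metadataBuffer.length, 0);
  return Buffer.concat([lengthPrefix, metadataBuffer, ciphertext]);
}

function unpackEncrypted(file: Buffer, key: Buffer): Buffer {
  const metadataLength = file.readUInt32BE(0);
  const metadata = JSON.parse(file.slice(4, 4 + metadataLength).toString("utf8"));
  const ciphertext = file.slice(4 + metadataLength);
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, Buffer.from(metadata.iv, "hex"));
  decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}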
@@ -207,6 +392,26 @@ class DatabaseFileEncryption {
const encryptedData = fs.readFileSync(encryptedPath);
if (
metadata.dataSize !== undefined &&
encryptedData.length !== metadata.dataSize
) {
databaseLogger.error(
"Encrypted file size mismatch - possible corrupted write or mismatched metadata",
null,
{
operation: "database_file_size_mismatch",
encryptedPath,
actualSize: encryptedData.length,
expectedSize: metadata.dataSize,
},
);
throw new Error(
`Encrypted file size mismatch: expected ${metadata.dataSize} bytes but got ${encryptedData.length} bytes. ` +
`This indicates corrupted files or interrupted write operation.`,
);
}
let key: Buffer;
if (metadata.version === "v2") {
key = await this.systemCrypto.getDatabaseKey();
@@ -233,7 +438,7 @@ class DatabaseFileEncryption {
metadata.algorithm,
key,
Buffer.from(metadata.iv, "hex"),
) as any;
) as crypto.DecipherGCM;
decipher.setAuthTag(Buffer.from(metadata.tag, "hex"));
const decrypted = Buffer.concat([
@@ -266,18 +471,43 @@ class DatabaseFileEncryption {
}
static isEncryptedDatabaseFile(filePath: string): boolean {
const metadataPath = `${filePath}${this.METADATA_FILE_SUFFIX}`;
if (!fs.existsSync(filePath) || !fs.existsSync(metadataPath)) {
if (!fs.existsSync(filePath)) {
return false;
}
const metadataPath = `${filePath}${this.METADATA_FILE_SUFFIX}`;
if (fs.existsSync(metadataPath)) {
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
return (
metadata.version === this.VERSION &&
metadata.algorithm === this.ALGORITHM
);
} catch {
return false;
}
}
try {
const metadataContent = fs.readFileSync(metadataPath, "utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
const fileBuffer = fs.readFileSync(filePath);
if (fileBuffer.length < 4) return false;
const metadataLength = fileBuffer.readUInt32BE(0);
const metadataEnd = 4 + metadataLength;
if (metadataLength <= 0 || metadataEnd > fileBuffer.length) {
return false;
}
const metadataJson = fileBuffer.slice(4, metadataEnd).toString("utf8");
const metadata: EncryptedFileMetadata = JSON.parse(metadataJson);
return (
metadata.version === this.VERSION &&
metadata.algorithm === this.ALGORITHM
metadata.algorithm === this.ALGORITHM &&
!!metadata.iv &&
!!metadata.tag
);
} catch {
return false;
@@ -301,7 +531,6 @@ class DatabaseFileEncryption {
const metadata: EncryptedFileMetadata = JSON.parse(metadataContent);
const fileStats = fs.statSync(encryptedPath);
const currentFingerprint = "termix-v1-file";
return {
version: metadata.version,
@@ -315,6 +544,125 @@ class DatabaseFileEncryption {
}
}
static getDiagnosticInfo(encryptedPath: string): {
dataFile: {
exists: boolean;
size?: number;
mtime?: string;
readable?: boolean;
};
metadataFile: {
exists: boolean;
size?: number;
mtime?: string;
readable?: boolean;
content?: EncryptedFileMetadata;
};
environment: {
dataDir: string;
envPath: string;
envFileExists: boolean;
envFileReadable: boolean;
hasEnvKey: boolean;
envKeyLength: number;
};
validation: {
filesConsistent: boolean;
sizeMismatch?: boolean;
expectedSize?: number;
actualSize?: number;
};
} {
const metadataPath = `${encryptedPath}${this.METADATA_FILE_SUFFIX}`;
const dataDir = process.env.DATA_DIR || "./db/data";
const envPath = path.join(dataDir, ".env");
const result: ReturnType<typeof this.getDiagnosticInfo> = {
dataFile: { exists: false },
metadataFile: { exists: false },
environment: {
dataDir,
envPath,
envFileExists: false,
envFileReadable: false,
hasEnvKey: !!process.env.DATABASE_KEY,
envKeyLength: process.env.DATABASE_KEY?.length || 0,
},
validation: {
filesConsistent: false,
},
};
try {
result.dataFile.exists = fs.existsSync(encryptedPath);
if (result.dataFile.exists) {
try {
fs.accessSync(encryptedPath, fs.constants.R_OK);
result.dataFile.readable = true;
const stats = fs.statSync(encryptedPath);
result.dataFile.size = stats.size;
result.dataFile.mtime = stats.mtime.toISOString();
} catch {
result.dataFile.readable = false;
}
}
result.metadataFile.exists = fs.existsSync(metadataPath);
if (result.metadataFile.exists) {
try {
fs.accessSync(metadataPath, fs.constants.R_OK);
result.metadataFile.readable = true;
const stats = fs.statSync(metadataPath);
result.metadataFile.size = stats.size;
result.metadataFile.mtime = stats.mtime.toISOString();
const content = fs.readFileSync(metadataPath, "utf8");
result.metadataFile.content = JSON.parse(content);
} catch {
result.metadataFile.readable = false;
}
}
result.environment.envFileExists = fs.existsSync(envPath);
if (result.environment.envFileExists) {
try {
fs.accessSync(envPath, fs.constants.R_OK);
result.environment.envFileReadable = true;
} catch (error) {}
}
if (
result.dataFile.exists &&
result.metadataFile.exists &&
result.metadataFile.content
) {
result.validation.filesConsistent = true;
if (result.metadataFile.content.dataSize !== undefined) {
result.validation.expectedSize = result.metadataFile.content.dataSize;
result.validation.actualSize = result.dataFile.size;
result.validation.sizeMismatch =
result.metadataFile.content.dataSize !== result.dataFile.size;
if (result.validation.sizeMismatch) {
result.validation.filesConsistent = false;
}
}
}
} catch (error) {
databaseLogger.error("Failed to generate diagnostic info", error, {
operation: "diagnostic_info_failed",
encryptedPath,
});
}
databaseLogger.info("Database encryption diagnostic info", {
operation: "diagnostic_info_generated",
...result,
});
return result;
}
static async createEncryptedBackup(
databasePath: string,
backupDir: string,

View File

@@ -55,7 +55,6 @@ export class DatabaseMigration {
if (hasEncryptedDb && hasUnencryptedDb) {
const unencryptedSize = fs.statSync(this.unencryptedDbPath).size;
const encryptedSize = fs.statSync(this.encryptedDbPath).size;
if (unencryptedSize === 0) {
needsMigration = false;
@@ -63,10 +62,6 @@ export class DatabaseMigration {
"Empty unencrypted database found alongside encrypted database. Removing empty file.";
try {
fs.unlinkSync(this.unencryptedDbPath);
databaseLogger.info("Removed empty unencrypted database file", {
operation: "migration_cleanup_empty",
path: this.unencryptedDbPath,
});
} catch (error) {
databaseLogger.warn("Failed to remove empty unencrypted database", {
operation: "migration_cleanup_empty_failed",
@@ -168,9 +163,6 @@ export class DatabaseMigration {
return false;
}
let totalOriginalRows = 0;
let totalMemoryRows = 0;
for (const table of originalTables) {
const originalCount = originalDb
.prepare(`SELECT COUNT(*) as count FROM ${table.name}`)
@@ -179,9 +171,6 @@ export class DatabaseMigration {
.prepare(`SELECT COUNT(*) as count FROM ${table.name}`)
.get() as { count: number };
totalOriginalRows += originalCount.count;
totalMemoryRows += memoryCount.count;
if (originalCount.count !== memoryCount.count) {
databaseLogger.error(
"Row count mismatch for table during migration verification",
@@ -241,7 +230,9 @@ export class DatabaseMigration {
memoryDb.exec("PRAGMA foreign_keys = OFF");
for (const table of tables) {
const rows = originalDb.prepare(`SELECT * FROM ${table.name}`).all();
const rows = originalDb
.prepare(`SELECT * FROM ${table.name}`)
.all() as Record<string, unknown>[];
if (rows.length > 0) {
const columns = Object.keys(rows[0]);
@@ -251,7 +242,7 @@ export class DatabaseMigration {
);
const insertTransaction = memoryDb.transaction(
(dataRows: any[]) => {
(dataRows: Record<string, unknown>[]) => {
for (const row of dataRows) {
const values = columns.map((col) => row[col]);
insertStmt.run(values);

View File

@@ -71,11 +71,6 @@ export class DatabaseSaveTrigger {
this.pendingSave = true;
try {
databaseLogger.info("Force saving database", {
operation: "db_save_trigger_force_start",
reason,
});
await this.saveFunction();
} catch (error) {
databaseLogger.error("Database force save failed", error, {
@@ -110,9 +105,5 @@ export class DatabaseSaveTrigger {
this.pendingSave = false;
this.isInitialized = false;
this.saveFunction = null;
databaseLogger.info("Database save trigger cleaned up", {
operation: "db_save_trigger_cleanup",
});
}
}

View File

@@ -17,18 +17,36 @@ class FieldCrypto {
private static readonly ENCRYPTED_FIELDS = {
users: new Set([
"password_hash",
"passwordHash",
"client_secret",
"clientSecret",
"totp_secret",
"totpSecret",
"totp_backup_codes",
"totpBackupCodes",
"oidc_identifier",
"oidcIdentifier",
]),
ssh_data: new Set([
"password",
"key",
"key_password",
"keyPassword",
"keyType",
"autostartPassword",
"autostartKey",
"autostartKeyPassword",
]),
ssh_data: new Set(["password", "key", "key_password"]),
ssh_credentials: new Set([
"password",
"private_key",
"privateKey",
"key_password",
"keyPassword",
"key",
"public_key",
"publicKey",
"keyType",
]),
};
@@ -47,7 +65,11 @@ class FieldCrypto {
);
const iv = crypto.randomBytes(this.IV_LENGTH);
const cipher = crypto.createCipheriv(this.ALGORITHM, fieldKey, iv) as any;
const cipher = crypto.createCipheriv(
this.ALGORITHM,
fieldKey,
iv,
) as crypto.CipherGCM;
let encrypted = cipher.update(plaintext, "utf8", "hex");
encrypted += cipher.final("hex");
@@ -89,7 +111,7 @@ class FieldCrypto {
this.ALGORITHM,
fieldKey,
Buffer.from(encrypted.iv, "hex"),
) as any;
) as crypto.DecipherGCM;
decipher.setAuthTag(Buffer.from(encrypted.tag, "hex"));
let decrypted = decipher.update(encrypted.data, "hex", "utf8");

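These field-level helpers are what the migrations above call into. A round-trip sketch, assuming encryptField returns a serialized string carrying iv/tag/data and that the same recordId and fieldName must be supplied on decryption:

// sketch: per-field round trip bound to a record id and field name
import { FieldCrypto } from "./field-crypto.js";

const dataKey = Buffer.alloc(32, 1); // placeholder user data key
const stored = FieldCrypto.encryptField("hunter2", dataKey, "42", "password");
const plain = FieldCrypto.decryptField(stored, dataKey, "42", "password");
// decrypting with a different recordId or fieldName would derive a different
// field key and fail authentication (assumption based on the derivation
// arguments shown above), so ciphertexts are tied to their row and column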
View File

@@ -1,6 +1,14 @@
import { FieldCrypto } from "./field-crypto.js";
import { databaseLogger } from "./logger.js";
interface DatabaseInstance {
prepare: (sql: string) => {
all: (param?: unknown) => unknown[];
get: (param?: unknown) => unknown;
run: (...params: unknown[]) => unknown;
};
}
export class LazyFieldEncryption {
private static readonly LEGACY_FIELD_NAME_MAP: Record<string, string> = {
key_password: "keyPassword",
@@ -39,7 +47,7 @@ export class LazyFieldEncryption {
return false;
}
return true;
} catch (jsonError) {
} catch {
return true;
}
}
@@ -74,7 +82,7 @@ export class LazyFieldEncryption {
legacyFieldName,
);
return decrypted;
} catch (legacyError) {}
} catch (error) {}
}
const sensitiveFields = [
@@ -145,7 +153,7 @@ export class LazyFieldEncryption {
wasPlaintext: false,
wasLegacyEncryption: false,
};
} catch (error) {
} catch {
const legacyFieldName = this.LEGACY_FIELD_NAME_MAP[fieldName];
if (legacyFieldName) {
try {
@@ -166,7 +174,7 @@ export class LazyFieldEncryption {
wasPlaintext: false,
wasLegacyEncryption: true,
};
} catch (legacyError) {}
} catch (error) {}
}
return {
encrypted: fieldValue,
@@ -178,12 +186,12 @@ export class LazyFieldEncryption {
}
static migrateRecordSensitiveFields(
record: any,
record: Record<string, unknown>,
sensitiveFields: string[],
userKEK: Buffer,
recordId: string,
): {
updatedRecord: any;
updatedRecord: Record<string, unknown>;
migratedFields: string[];
needsUpdate: boolean;
} {
@@ -198,7 +206,7 @@ export class LazyFieldEncryption {
try {
const { encrypted, wasPlaintext, wasLegacyEncryption } =
this.migrateFieldToEncrypted(
fieldValue,
fieldValue as string,
userKEK,
recordId,
fieldName,
@@ -253,7 +261,7 @@ export class LazyFieldEncryption {
try {
FieldCrypto.decryptField(fieldValue, userKEK, recordId, fieldName);
return false;
} catch (error) {
} catch {
const legacyFieldName = this.LEGACY_FIELD_NAME_MAP[fieldName];
if (legacyFieldName) {
try {
@@ -264,7 +272,7 @@ export class LazyFieldEncryption {
legacyFieldName,
);
return true;
} catch (legacyError) {
} catch {
return false;
}
}
@@ -275,7 +283,7 @@ export class LazyFieldEncryption {
static async checkUserNeedsMigration(
userId: string,
userKEK: Buffer,
db: any,
db: DatabaseInstance,
): Promise<{
needsMigration: boolean;
plaintextFields: Array<{
@@ -294,7 +302,9 @@ export class LazyFieldEncryption {
try {
const sshHosts = db
.prepare("SELECT * FROM ssh_data WHERE user_id = ?")
.all(userId);
.all(userId) as Array<
Record<string, unknown> & { id: string | number }
>;
for (const host of sshHosts) {
const sensitiveFields = this.getSensitiveFieldsForTable("ssh_data");
const hostPlaintextFields: string[] = [];
@@ -303,7 +313,7 @@ export class LazyFieldEncryption {
if (
host[field] &&
this.fieldNeedsMigration(
host[field],
host[field] as string,
userKEK,
host.id.toString(),
field,
@@ -325,7 +335,9 @@ export class LazyFieldEncryption {
const sshCredentials = db
.prepare("SELECT * FROM ssh_credentials WHERE user_id = ?")
.all(userId);
.all(userId) as Array<
Record<string, unknown> & { id: string | number }
>;
for (const credential of sshCredentials) {
const sensitiveFields =
this.getSensitiveFieldsForTable("ssh_credentials");
@@ -335,7 +347,7 @@ export class LazyFieldEncryption {
if (
credential[field] &&
this.fieldNeedsMigration(
credential[field],
credential[field] as string,
userKEK,
credential.id.toString(),
field,

View File

@@ -11,7 +11,7 @@ export interface LogContext {
sessionId?: string;
requestId?: string;
duration?: number;
[key: string]: any;
[key: string]: unknown;
}
const SENSITIVE_FIELDS = [
@@ -36,7 +36,7 @@ const SENSITIVE_FIELDS = [
const TRUNCATE_FIELDS = ["data", "content", "body", "response", "request"];
class Logger {
export class Logger {
private serviceName: string;
private serviceIcon: string;
private serviceColor: string;
@@ -253,5 +253,6 @@ export const apiLogger = new Logger("API", "🌐", "#3b82f6");
export const authLogger = new Logger("AUTH", "🔐", "#ef4444");
export const systemLogger = new Logger("SYSTEM", "🚀", "#14b8a6");
export const versionLogger = new Logger("VERSION", "📦", "#8b5cf6");
export const dashboardLogger = new Logger("DASHBOARD", "📊", "#ec4899");
export const logger = systemLogger;

Some files were not shown because too many files have changed in this diff.