mirror of
https://github.com/Start9Labs/start-os.git
synced 2026-03-26 18:31:52 +00:00
* docs: update preferred external port design in TODO * docs: add user-controlled public/private and port forward mapping to design * docs: overhaul interfaces page design with view/manage split and per-address controls * docs: move address enable/disable to overflow menu, add SSL indicator, defer UI placement decisions * chore: remove tor from startos core Tor is being moved from a built-in OS feature to a service. This removes the Arti-based Tor client, onion address management, hidden service creation, and all related code from the core backend, frontend, and SDK. - Delete core/src/net/tor/ module (~2060 lines) - Remove OnionAddress, TorSecretKey, TorController from all consumers - Remove HostnameInfo::Onion and HostAddress::Onion variants - Remove onion CRUD RPC endpoints and tor subcommand - Remove tor key handling from account and backup/restore - Remove ~12 tor-related Cargo dependencies (arti-client, torut, etc.) - Remove tor UI components, API methods, mock data, and routes - Remove OnionHostname and tor patterns/regexes from SDK - Add v0_4_0_alpha_20 database migration to strip onion data - Bump version to 0.4.0-alpha.20 * chore: flatten HostnameInfo from enum to struct HostnameInfo only had one variant (Ip) after removing Tor. Flatten it into a plain struct with fields gateway, public, hostname. Remove all kind === 'ip' type guards and narrowing across SDK, frontend, and container runtime. Update DB migration to strip the kind field. * chore: format RPCSpec.md markdown table * docs: update TODO.md with DerivedAddressInfo design, remove completed tor task * feat: implement preferred port allocation and per-address enable/disable - Add AvailablePorts::try_alloc() with SSL tracking (BTreeMap<u16, bool>) - Add DerivedAddressInfo on BindInfo with private_disabled/public_enabled/possible sets - Add Bindings wrapper with Map impl for patchdb indexed access - Flatten HostAddress from single-variant enum to struct - Replace set-gateway-enabled RPC with set-address-enabled - Remove hostname_info from Host; computed addresses now in BindInfo.addresses.possible - Compute possible addresses inline in NetServiceData::update() - Update DB migration, SDK types, frontend, and container-runtime * feat: replace InterfaceFilter with ForwardRequirements, add WildcardListener, complete alpha.20 bump - Replace DynInterfaceFilter with ForwardRequirements for per-IP forward precision with source-subnet iptables filtering for private forwards - Add WildcardListener (binds [::]:port) to replace the per-gateway NetworkInterfaceListener/SelfContainedNetworkInterfaceListener/ UpgradableListener infrastructure - Update forward-port script with src_subnet and excluded_src env vars - Remove unused filter types and listener infrastructure from gateway.rs - Add availablePorts migration (IdPool -> BTreeMap<u16, bool>) to alpha.20 - Complete version bump to 0.4.0-alpha.20 in SDK and web * outbound gateway support (#3120) * Multiple (#3111) * fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete * trigger chnage detection for localize pipe and round out implementing localize pipe for consistency even though not needed * Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112) * Fix PackageInfoShort to handle LocaleString on releaseNotes * fix: filter by target_version in get_matching_models and pass otherVersions from install * chore: add exver documentation for ai agents * frontend plus some be types --------- Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com> * feat: replace SourceFilter with IpNet, add policy routing, remove MASQUERADE * build ts types and fix i18n * fix license display in marketplace * wip refactor * chore: update ts bindings for preferred port design * feat: refactor NetService to watch DB and reconcile network state - NetService sync task now uses PatchDB DbWatch instead of being called directly after DB mutations - Read gateways from DB instead of network interface context when updating host addresses - gateway sync updates all host addresses in the DB - Add Watch<u64> channel for callers to wait on sync completion - Fix ts-rs codegen bug with #[ts(skip)] on flattened Plugin field - Update SDK getServiceInterface.ts for new HostnameInfo shape - Remove unnecessary HTTPS redirect in static_server.rs - Fix tunnel/api.rs to filter for WAN IPv4 address * re-arrange (#3123) * new service interfacee page * feat: add mdns hostname metadata variant and fix vhost routing - Add HostnameMetadata::Mdns variant to distinguish mDNS from private domains - Mark mDNS addresses as private (public: false) since mDNS is local-only - Fall back to null SNI entry when hostname not found in vhost mapping - Simplify public detection in ProxyTarget filter - Pass hostname to update_addresses for mDNS domain name generation * looking good * feat: add port_forwards field to Host for tracking gateway forwarding rules * update bindings for API types, add ARCHITECTURE (#3124) * update binding for API types, add ARCHITECTURE * translations * fix: add CONNMARK restore-mark to mangle OUTPUT chain The CONNMARK --restore-mark rule was only in PREROUTING, which handles forwarded packets. Locally-bound listeners (e.g. vhost) generate replies through the OUTPUT chain, where the fwmark was never restored. This caused response packets to route via the default table instead of back through the originating interface. * chore: reserialize db on equal version, update bindings and docs - Run de/ser roundtrip in pre_init even when db version matches, ensuring all #[serde(default)] fields are populated before any typed access - Add patchdb.md documentation for TypedDbWatch patterns - Update TS bindings for CheckPortParams, CheckPortRes, ifconfigUrl - Update CLAUDE.md docs with patchdb and component-level references * fix: include public gateways for IP-based addresses in vhost targets The server hostname vhost construction only collected private IPs, always setting public to empty. Public IP addresses (Ipv4/Ipv6 metadata with public=true) were never added to the vhost target's public gateway set, causing the vhost filter to reject public traffic for IP-based addresses. * fix: add TLS handshake timeout and fix accept loop deadlock Two issues in TlsListener::poll_accept: 1. No timeout on TLS handshakes: LazyConfigAcceptor waits indefinitely for ClientHello. Attackers that complete TCP handshake but never send TLS data create zombie futures in `in_progress` that never complete. Fix: wrap the entire handshake in tokio::time::timeout(15s). 2. Missing waker on new-connection pending path: when a TCP connection is accepted and the TLS handshake is pending, poll_accept returned Pending without calling wake_by_ref(). Since the TcpListener returned Ready (not Pending), no waker was registered for it. With edge- triggered epoll and no other wakeup source, the task sleeps forever and remaining connections in the kernel accept queue are never drained. Fix: add cx.waker().wake_by_ref() so the task immediately re-polls and continues draining the accept queue. * fix: switch BackgroundJobRunner from Vec to FuturesUnordered BackgroundJobRunner stored active jobs in a Vec<BoxFuture> and polled ALL of them on every wakeup — O(n) per poll. Since this runs in the same tokio::select! as the WebServer accept loop, polling overhead from active connections directly delayed acceptance of new connections. FuturesUnordered only polls woken futures — O(woken) instead of O(n). * chore: update bindings and use typed params for outbound gateway API * feat: per-service and default outbound gateway routing Add set-outbound-gateway RPC for packages and set-default-outbound RPC for the server, with policy routing enforcement via ip rules. Fix connmark restore to skip packets with existing fwmarks, add bridge subnet routes to per-interface tables, and fix squashfs path in update-image-local.sh. * refactor: manifest wraps PackageMetadata, move dependency_metadata to PackageVersionInfo Manifest now embeds PackageMetadata via #[serde(flatten)] instead of duplicating ~14 fields. icon and dependency_metadata moved from PackageMetadata to PackageVersionInfo since they are registry-enrichment data loaded from the S9PK archive. merge_with now returns errors on metadata/icon/dependency_metadata mismatches instead of silently ignoring them. * fix: replace .status() with .invoke() for iptables/ip commands Using .status() leaks stderr directly to system logs, causing noisy iptables error messages. Switch all networking CLI invocations to use .invoke() which captures stderr properly. For check-then-act patterns (iptables -C), use .invoke().await.is_err() instead of .status().await.map_or(false, |s| s.success()). * feat: add check-dns gateway endpoint and fix per-interface routing tables Add a `check-dns` RPC endpoint that verifies whether a gateway's DNS is properly configured for private domain resolution. Uses a three-tier check: direct match (DNS == server IP), TXT challenge probe (DNS on LAN), or failure (DNS off-subnet). Fix per-interface routing tables to clone all non-default routes from the main table instead of only the interface's own subnets. This preserves LAN reachability when the priority-75 catch-all overrides default routing. Filter out status-only flags (linkdown, dead) that are invalid for `ip route add`. * refactor: rename manifest metadata fields and improve error display Rename wrapperRepo→packageRepo, marketingSite→marketingUrl, docsUrl→docsUrls (array), remove supportSite. Add display_src/display_dbg helpers to Error. Fix DepInfo description type to LocaleString. Update web UI, SDK bindings, tests, and fixtures to match. Clean up cli_attach error handling and remove dead commented code. * chore: bump sdk version to 0.4.0-beta.49 * chore: add createTask decoupling TODO * chore: add TODO to clear service error state on install/update * round out dns check, dns server check, port forward check, and gateway port forwards * chore: add TODOs for URL plugins, NAT hairpinning, and start-tunnel OTA updates * version instead of os query param * interface row clickable again, bu now with a chevron! * feat: implement URL plugins with table/row actions and prefill support - Add URL plugin effects (register, export_url, clear_urls) in core - Add PluginHostnameInfo, HostnameMetadata::Plugin, and plugin registration types - Implement plugin URL table in web UI with tableAction button and rowAction overflow menus - Thread urlPluginMetadata (packageId, hostId, interfaceId, internalPort) as prefill to actions - Add prefill support to PackageActionData so metadata passes through form dialogs - Add i18n translations for plugin error messages - Clean up plugin URLs on package uninstall * feat: split row_actions into remove_action and overflow_actions for URL plugins * touch up URL plugins table * show table even when no addresses * feat: NAT hairpinning, DNS static servers, clear service error on install - Add POSTROUTING MASQUERADE rules for container and host hairpin NAT - Allow bridge subnet containers to reach private forwards via LAN IPs - Pass bridge_subnet env var from forward.rs to forward-port script - Use DB-configured static DNS servers in resolver with DB watcher - Fall back to resolv.conf servers when no static servers configured - Clear service error state when install/update completes successfully - Remove completed TODO items * feat: builder-style InputSpec API, prefill plumbing, and port forward fix - Add addKey() and add() builder methods to InputSpec with InputSpecTools - Move OuterType to last generic param on Value, List, and all dynamic methods - Plumb prefill through getActionInput end-to-end (core → container-runtime → SDK) - Filter port_forwards to enabled addresses only - Bump SDK to 0.4.0-beta.50 * fix: propagate host locale into LXC containers and write locale.conf * chore: remove completed URL plugins TODO * feat: OTA updates for start-tunnel via apt repository (untested) - Add apt repo publish script (build/apt/publish-deb.sh) for S3-hosted repo - Add apt source config and GPG key placeholder (apt/) - Add tunnel.update.check and tunnel.update.apply RPC endpoints - Wire up update API in tunnel frontend (api service + mock) - Uses systemd-run --scope to survive service restart during update * fix: publish script dpkg-name, s3cfg fallback, and --reinstall for apply * chore: replace OTA updates TODO with UI TODO for MattDHill * feat: add getOutboundGateway effect and simplify VersionGraph init/uninit Add getOutboundGateway effect across core, container-runtime, and SDK to let services query their effective outbound gateway with callback support. Remove preInstall/uninstall hooks from VersionGraph as they are no longer needed. * frontend start-tunnel updates * chore: remove completed TODO * feat: tor hidden service key migration * chore: migrate from ts-matches to zod across all TypeScript packages * feat(core): allow setting server hostname * send prefill for tasks and hide operations to hidden fields * fix(core): preserve plugin URLs across binding updates BindInfo::update was replacing addresses with a new DerivedAddressInfo that cleared the available set, wiping plugin-exported URLs whenever bind() was called. Also simplify update_addresses plugin preservation to use retain in place rather than collecting into a separate set. * minor cleanup from patch-db audit * clean up prefill flow * frontend support for setting and changing hostname * feat(core): refactor hostname to ServerHostnameInfo with name/hostname pair - Rename Hostname to ServerHostnameInfo, add name + hostname fields - Add set_hostname_rpc for changing hostname at runtime - Migrate alpha_20: generate serverInfo.name from hostname, delete ui.name - Extract gateway.rs helpers to fix rustfmt nesting depth issue - Add i18n key for hostname validation error - Update SDK bindings * add comments to everything potentially consumer facing (#3127) * add comments to everything potentially consumer facing * rework smtp --------- Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com> * implement server name * setup changes * clean up copy around addresses table * feat: add zod-deep-partial, partialValidator on InputSpec, and z.deepPartial re-export * fix: header color in zoom (#3128) * fix: merge version ranges when adding existing package signer (#3125) * fix: merge version ranges when adding existing package signer Previously, add_package_signer unconditionally inserted the new version range, overwriting any existing authorization for that signer. Now it OR-merges the new range with the existing one, so running signer add multiple times accumulates permissions rather than replacing them. * add --merge flag to registry package signer add Default behavior remains overwrite. When --merge is passed, the new version range is OR-merged with the existing one, allowing admins to accumulate permissions incrementally. * add missing attribute to TS type * make merge optional * upsert instead of insert * VersionRange::None on upsert * fix: header color in zoom --------- Co-authored-by: Dominion5254 <musashidisciple@proton.me> * update snake and add about this server to system general * chore: bump sdk to beta.53, wrap z.deepPartial with passthrough * reset instead of reset defaults * action failure show dialog * chore: bump sdk to beta.54, add device-info RPC, improve SDK abort handling and InputSpec filtering - Bump SDK version to 0.4.0-beta.54 - Add `server.device-info` RPC endpoint and `s9pk select` CLI command - Extract `HardwareRequirements::is_compatible()` method, reuse in registry filtering - Add `AbortedError` class with `muteUnhandled` flag, replace generic abort errors - Handle unhandled promise rejections in container-runtime with mute support - Improve `InputSpec.filter()` with `keepByDefault` param and boolean filter values - Accept readonly tuples in `CommandType` and `splitCommand` - Remove `sync_host` calls from host API handlers (binding/address changes) - Filter mDNS hostnames by secure gateway availability - Derive mDNS enabled state from LAN IPs in web UI - Add "Open UI" action to address table, disable mDNS toggle - Hide debug details in service error component - Update rpc-toolkit docs for no-params handlers * fix: add --no-nvram to efi grub-install to preserve built-in boot order * update snake * diable actions when in error state * chore: split out nvidia variant * misc bugfixes * create manage-release script (untested) * fix: preserve z namespace types for sdk consumers * sdk version bump * new checkPort types * multiple bugs and better port forward ux * fix link * chore: todos and formatting * fix build --------- Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com> Co-authored-by: Matt Hill <mattnine@protonmail.com> Co-authored-by: Alex Inkin <alexander@inkin.ru> Co-authored-by: Dominion5254 <musashidisciple@proton.me>
1014 lines
28 KiB
TypeScript
1014 lines
28 KiB
TypeScript
import * as fs from 'fs/promises'
|
|
import * as T from '../../../base/lib/types'
|
|
import * as cp from 'child_process'
|
|
import { promisify } from 'util'
|
|
import { Buffer } from 'node:buffer'
|
|
import { once } from '../../../base/lib/util/once'
|
|
import { Drop } from '../../../base/lib/util/Drop'
|
|
import { Mounts } from '../mainFn/Mounts'
|
|
import { BackupEffects } from '../backup/Backups'
|
|
import { PathBase } from './Volume'
|
|
|
|
export const execFile = promisify(cp.execFile)
|
|
const False = () => false
|
|
|
|
type ExecResults = {
|
|
exitCode: number | null
|
|
exitSignal: NodeJS.Signals | null
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}
|
|
|
|
export type ExecOptions = {
|
|
input?: string | Buffer
|
|
}
|
|
|
|
const TIMES_TO_WAIT_FOR_PROC = 100
|
|
|
|
async function prepBind(
|
|
from: string | null,
|
|
to: string,
|
|
type: 'file' | 'directory' | 'infer',
|
|
) {
|
|
const fromMeta = from ? await fs.stat(from).catch((_) => null) : null
|
|
const toMeta = await fs.stat(to).catch((_) => null)
|
|
|
|
if (type === 'file' || (type === 'infer' && from && fromMeta?.isFile())) {
|
|
if (toMeta && toMeta.isDirectory()) await fs.rmdir(to, { recursive: false })
|
|
if (from && !fromMeta) {
|
|
await fs.mkdir(from.replace(/\/[^\/]*\/?$/, ''), { recursive: true })
|
|
await fs.writeFile(from, '')
|
|
}
|
|
if (!toMeta) {
|
|
await fs.mkdir(to.replace(/\/[^\/]*\/?$/, ''), { recursive: true })
|
|
await fs.writeFile(to, '')
|
|
}
|
|
} else {
|
|
if (toMeta && toMeta.isFile() && !toMeta.size) await fs.rm(to)
|
|
if (from && !fromMeta) await fs.mkdir(from, { recursive: true })
|
|
if (!toMeta) await fs.mkdir(to, { recursive: true })
|
|
}
|
|
}
|
|
|
|
async function bind(
|
|
from: string,
|
|
to: string,
|
|
type: 'file' | 'directory' | 'infer',
|
|
idmap: IdMap[],
|
|
) {
|
|
await prepBind(from, to, type)
|
|
|
|
const args = ['--bind']
|
|
|
|
if (idmap.length) {
|
|
args.push(
|
|
`-oX-mount.idmap=${idmap.map((i) => `b:${i.fromId}:${i.toId}:${i.range}`).join(' ')}`,
|
|
)
|
|
}
|
|
|
|
await execFile('mount', [...args, from, to])
|
|
}
|
|
|
|
/**
|
|
* Interface representing an isolated container environment for running service processes.
|
|
*
|
|
* Provides methods for executing commands, spawning processes, mounting filesystems,
|
|
* and writing files within the container's rootfs. Comes in two flavors:
|
|
* {@link SubContainerOwned} (owns the underlying filesystem) and
|
|
* {@link SubContainerRc} (reference-counted handle to a shared container).
|
|
*/
|
|
export interface SubContainer<
|
|
Manifest extends T.SDKManifest,
|
|
Effects extends T.Effects = T.Effects,
|
|
> extends Drop,
|
|
PathBase {
|
|
readonly imageId: keyof Manifest['images'] & T.ImageId
|
|
readonly rootfs: string
|
|
readonly guid: T.Guid
|
|
|
|
/**
|
|
* Get the absolute path to a file or directory within this subcontainer's rootfs
|
|
* @param path Path relative to the rootfs
|
|
*/
|
|
subpath(path: string): string
|
|
|
|
/**
|
|
* Apply filesystem mounts (volumes, assets, dependencies, backups) to this subcontainer.
|
|
* @param mounts - The Mounts configuration to apply
|
|
* @returns This subcontainer instance for chaining
|
|
*/
|
|
mount(
|
|
mounts: Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>,
|
|
): Promise<this>
|
|
|
|
/** Destroy this subcontainer and clean up its filesystem */
|
|
destroy: () => Promise<null>
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer
|
|
* DOES NOT THROW ON NONZERO EXIT CODE (see execFail)
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
exec(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs?: number | null,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
throw: () => { stdout: string | Buffer; stderr: string | Buffer }
|
|
exitCode: number | null
|
|
exitSignal: NodeJS.Signals | null
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}>
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer, throwing on non-zero exit status
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
execFail(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs?: number | null,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}>
|
|
|
|
/**
|
|
* Launch a command as the init (PID 1) process of the subcontainer.
|
|
* Replaces the current leader process.
|
|
* @param command - The command and arguments to execute
|
|
* @param options - Optional environment, working directory, and user overrides
|
|
*/
|
|
launch(
|
|
command: string[],
|
|
options?: CommandOptions,
|
|
): Promise<cp.ChildProcessWithoutNullStreams>
|
|
|
|
/**
|
|
* Spawn a command inside the subcontainer as a non-init process.
|
|
* @param command - The command and arguments to execute
|
|
* @param options - Optional environment, working directory, user, and stdio overrides
|
|
*/
|
|
spawn(
|
|
command: string[],
|
|
options?: CommandOptions & StdioOptions,
|
|
): Promise<cp.ChildProcess>
|
|
|
|
/**
|
|
* @description Write a file to the subcontainer's filesystem
|
|
* @param path Path relative to the subcontainer rootfs (e.g. "/etc/config.json")
|
|
* @param data The data to write
|
|
* @param options Optional write options (same as node:fs/promises writeFile)
|
|
*/
|
|
writeFile(
|
|
path: string,
|
|
data:
|
|
| string
|
|
| NodeJS.ArrayBufferView
|
|
| Iterable<string | NodeJS.ArrayBufferView>
|
|
| AsyncIterable<string | NodeJS.ArrayBufferView>,
|
|
options?: Parameters<typeof fs.writeFile>[2],
|
|
): Promise<void>
|
|
|
|
/**
|
|
* Create a reference-counted handle to this subcontainer.
|
|
* The underlying container is only destroyed when all handles are released.
|
|
*/
|
|
rc(): SubContainerRc<Manifest, Effects>
|
|
|
|
/** Returns true if this is an owned subcontainer (not a reference-counted handle) */
|
|
isOwned(): this is SubContainerOwned<Manifest, Effects>
|
|
}
|
|
|
|
/**
|
|
* Want to limit what we can do in a container, so we want to launch a container with a specific image and the mounts.
|
|
*/
|
|
export class SubContainerOwned<
|
|
Manifest extends T.SDKManifest,
|
|
Effects extends T.Effects = T.Effects,
|
|
>
|
|
extends Drop
|
|
implements SubContainer<Manifest, Effects>
|
|
{
|
|
private destroyed = false
|
|
public rcs = 0
|
|
|
|
private leader: cp.ChildProcess
|
|
private leaderExited: boolean = false
|
|
private waitProc: () => Promise<null>
|
|
private constructor(
|
|
readonly effects: Effects,
|
|
readonly imageId: keyof Manifest['images'] & T.ImageId,
|
|
readonly rootfs: string,
|
|
readonly guid: T.Guid,
|
|
) {
|
|
super()
|
|
this.leaderExited = false
|
|
this.leader = cp.spawn(
|
|
'start-container',
|
|
['subcontainer', 'launch', rootfs],
|
|
{
|
|
killSignal: 'SIGKILL',
|
|
stdio: 'inherit',
|
|
},
|
|
)
|
|
this.leader.on('exit', () => {
|
|
this.leaderExited = true
|
|
})
|
|
this.waitProc = once(
|
|
() =>
|
|
new Promise(async (resolve, reject) => {
|
|
let count = 0
|
|
while (
|
|
!(await fs.stat(`${this.rootfs}/proc/1`).then((x) => !!x, False))
|
|
) {
|
|
if (count++ > TIMES_TO_WAIT_FOR_PROC) {
|
|
console.debug('Failed to start subcontainer', {
|
|
guid: this.guid,
|
|
imageId: this.imageId,
|
|
rootfs: this.rootfs,
|
|
})
|
|
return reject(
|
|
new Error(`Failed to start subcontainer ${this.imageId}`),
|
|
)
|
|
}
|
|
await wait(1)
|
|
}
|
|
resolve(null)
|
|
}),
|
|
)
|
|
}
|
|
static async of<Manifest extends T.SDKManifest, Effects extends T.Effects>(
|
|
effects: Effects,
|
|
image: {
|
|
imageId: keyof Manifest['images'] & T.ImageId
|
|
sharedRun?: boolean
|
|
},
|
|
mounts:
|
|
| (Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>)
|
|
| null,
|
|
name: string,
|
|
): Promise<SubContainerOwned<Manifest, Effects>> {
|
|
const { imageId, sharedRun } = image
|
|
const [rootfs, guid] = await effects.subcontainer.createFs({
|
|
imageId,
|
|
name,
|
|
})
|
|
|
|
const res = new SubContainerOwned(effects, imageId, rootfs, guid)
|
|
|
|
try {
|
|
if (mounts) {
|
|
await res.mount(mounts)
|
|
}
|
|
const shared = ['dev', 'sys']
|
|
if (!!sharedRun) {
|
|
shared.push('run')
|
|
}
|
|
|
|
await fs.mkdir(`${rootfs}/etc`, { recursive: true })
|
|
await fs.copyFile('/etc/resolv.conf', `${rootfs}/etc/resolv.conf`)
|
|
|
|
for (const dirPart of shared) {
|
|
const from = `/${dirPart}`
|
|
const to = `${rootfs}/${dirPart}`
|
|
await fs.mkdir(from, { recursive: true })
|
|
await fs.mkdir(to, { recursive: true })
|
|
await execFile('mount', ['--rbind', from, to])
|
|
}
|
|
|
|
return res
|
|
} catch (e) {
|
|
await res.destroy()
|
|
throw e
|
|
}
|
|
}
|
|
|
|
static async withTemp<
|
|
Manifest extends T.SDKManifest,
|
|
T,
|
|
Effects extends T.Effects,
|
|
>(
|
|
effects: Effects,
|
|
image: {
|
|
imageId: keyof Manifest['images'] & T.ImageId
|
|
sharedRun?: boolean
|
|
},
|
|
mounts:
|
|
| (Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>)
|
|
| null,
|
|
name: string,
|
|
fn: (subContainer: SubContainer<Manifest, Effects>) => Promise<T>,
|
|
): Promise<T> {
|
|
const subContainer = await SubContainerOwned.of(
|
|
effects,
|
|
image,
|
|
mounts,
|
|
name,
|
|
)
|
|
try {
|
|
return await fn(subContainer)
|
|
} finally {
|
|
await subContainer.destroy()
|
|
}
|
|
}
|
|
|
|
subpath(path: string): string {
|
|
return path.startsWith('/')
|
|
? `${this.rootfs}${path}`
|
|
: `${this.rootfs}/${path}`
|
|
}
|
|
|
|
async mount(
|
|
mounts: Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>,
|
|
): Promise<this> {
|
|
for (let mount of mounts.build()) {
|
|
let { options, mountpoint } = mount
|
|
const path = mountpoint.startsWith('/')
|
|
? `${this.rootfs}${mountpoint}`
|
|
: `${this.rootfs}/${mountpoint}`
|
|
if (options.type === 'volume') {
|
|
const subpath = options.subpath
|
|
? options.subpath.startsWith('/')
|
|
? options.subpath
|
|
: `/${options.subpath}`
|
|
: '/'
|
|
const from = `/media/startos/volumes/${options.volumeId}${subpath}`
|
|
|
|
await bind(from, path, options.filetype, options.idmap)
|
|
} else if (options.type === 'assets') {
|
|
const subpath = options.subpath
|
|
? options.subpath.startsWith('/')
|
|
? options.subpath
|
|
: `/${options.subpath}`
|
|
: '/'
|
|
const from = `/media/startos/assets/${subpath}`
|
|
|
|
await bind(from, path, options.filetype, options.idmap)
|
|
} else if (options.type === 'pointer') {
|
|
await prepBind(null, path, 'directory')
|
|
await this.effects.mount({ location: path, target: options })
|
|
} else if (options.type === 'backup') {
|
|
const subpath = options.subpath
|
|
? options.subpath.startsWith('/')
|
|
? options.subpath
|
|
: `/${options.subpath}`
|
|
: '/'
|
|
const from = `/media/startos/backup${subpath}`
|
|
|
|
await bind(from, path, options.filetype, options.idmap)
|
|
} else {
|
|
throw new Error(`unknown type ${(options as any).type}`)
|
|
}
|
|
}
|
|
return this
|
|
}
|
|
|
|
private async killLeader() {
|
|
if (this.leaderExited) {
|
|
return
|
|
}
|
|
return new Promise<null>((resolve, reject) => {
|
|
try {
|
|
let timeout = setTimeout(() => this.leader.kill('SIGKILL'), 30000)
|
|
this.leader.on('exit', () => {
|
|
clearTimeout(timeout)
|
|
resolve(null)
|
|
})
|
|
if (!this.leader.kill('SIGTERM')) {
|
|
reject(new Error('kill(2) failed'))
|
|
}
|
|
} catch (e) {
|
|
reject(e)
|
|
}
|
|
})
|
|
}
|
|
|
|
get destroy() {
|
|
return async () => {
|
|
if (!this.destroyed) {
|
|
const guid = this.guid
|
|
await this.killLeader()
|
|
await this.effects.subcontainer.destroyFs({ guid })
|
|
this.destroyed = true
|
|
}
|
|
return null
|
|
}
|
|
}
|
|
|
|
onDrop(): void {
|
|
console.log(`Cleaning up dangling subcontainer ${this.guid}`)
|
|
this.destroy()
|
|
}
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer
|
|
* DOES NOT THROW ON NONZERO EXIT CODE (see execFail)
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
async exec(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs: number | null = 30000,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
throw: () => { stdout: string | Buffer; stderr: string | Buffer }
|
|
exitCode: number | null
|
|
exitSignal: NodeJS.Signals | null
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}> {
|
|
await this.waitProc()
|
|
const imageMeta: T.ImageMetadata = await fs
|
|
.readFile(`/media/startos/images/${this.imageId}.json`, {
|
|
encoding: 'utf8',
|
|
})
|
|
.catch(() => '{}')
|
|
.then(JSON.parse)
|
|
let extra: string[] = []
|
|
let user = imageMeta.user || 'root'
|
|
if (options?.user) {
|
|
user = options.user
|
|
delete options.user
|
|
}
|
|
let workdir = imageMeta.workdir || '/'
|
|
if (options?.cwd) {
|
|
workdir = options.cwd
|
|
delete options.cwd
|
|
}
|
|
if (options?.env) {
|
|
for (let [k, v] of Object.entries(options.env)) {
|
|
extra.push(`--env=${k}=${v}`)
|
|
}
|
|
}
|
|
const child = cp.spawn(
|
|
'start-container',
|
|
[
|
|
'subcontainer',
|
|
'exec',
|
|
`--env-file=/media/startos/images/${this.imageId}.env`,
|
|
`--user=${user}`,
|
|
`--workdir=${workdir}`,
|
|
...extra,
|
|
this.rootfs,
|
|
...command,
|
|
],
|
|
options || {},
|
|
)
|
|
abort?.signal.addEventListener('abort', () => child.kill('SIGKILL'))
|
|
if (options?.input) {
|
|
await new Promise<null>((resolve, reject) => {
|
|
try {
|
|
child.stdin.on('error', (e) => reject(e))
|
|
child.stdin.write(options.input, (e) => {
|
|
if (e) {
|
|
reject(e)
|
|
} else {
|
|
resolve(null)
|
|
}
|
|
})
|
|
} catch (e) {
|
|
reject(e)
|
|
}
|
|
})
|
|
await new Promise<null>((resolve, reject) => {
|
|
try {
|
|
child.stdin.end(resolve)
|
|
} catch (e) {
|
|
reject(e)
|
|
}
|
|
})
|
|
}
|
|
const stdout = { data: '' as string }
|
|
const stderr = { data: '' as string }
|
|
const appendData =
|
|
(appendTo: { data: string }) => (chunk: string | Buffer | any) => {
|
|
if (typeof chunk === 'string' || chunk instanceof Buffer) {
|
|
appendTo.data += chunk.toString()
|
|
} else {
|
|
console.error('received unexpected chunk', chunk)
|
|
}
|
|
}
|
|
return new Promise((resolve, reject) => {
|
|
child.on('error', reject)
|
|
let killTimeout: NodeJS.Timeout | undefined
|
|
if (timeoutMs !== null && child.pid) {
|
|
killTimeout = setTimeout(() => child.kill('SIGKILL'), timeoutMs)
|
|
}
|
|
child.stdout.on('data', appendData(stdout))
|
|
child.stderr.on('data', appendData(stderr))
|
|
child.on('exit', (code, signal) => {
|
|
clearTimeout(killTimeout)
|
|
const result = {
|
|
exitCode: code,
|
|
exitSignal: signal,
|
|
stdout: stdout.data,
|
|
stderr: stderr.data,
|
|
}
|
|
resolve({
|
|
throw: () =>
|
|
!code && !signal
|
|
? { stdout: stdout.data, stderr: stderr.data }
|
|
: (() => {
|
|
throw new ExitError(command[0], result)
|
|
})(),
|
|
...result,
|
|
})
|
|
})
|
|
})
|
|
}
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer, throwing on non-zero exit status
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
async execFail(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs?: number | null,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}> {
|
|
return this.exec(command, options, timeoutMs, abort).then((res) =>
|
|
res.throw(),
|
|
)
|
|
}
|
|
|
|
async launch(
|
|
command: string[],
|
|
options?: CommandOptions,
|
|
): Promise<cp.ChildProcessWithoutNullStreams> {
|
|
await this.waitProc()
|
|
const imageMeta: T.ImageMetadata = await fs
|
|
.readFile(`/media/startos/images/${this.imageId}.json`, {
|
|
encoding: 'utf8',
|
|
})
|
|
.catch(() => '{}')
|
|
.then(JSON.parse)
|
|
let extra: string[] = []
|
|
let user = imageMeta.user || 'root'
|
|
if (options?.user) {
|
|
user = options.user
|
|
delete options.user
|
|
}
|
|
let workdir = imageMeta.workdir || '/'
|
|
if (options?.cwd) {
|
|
workdir = options.cwd
|
|
delete options.cwd
|
|
}
|
|
if (options?.env) {
|
|
for (let [k, v] of Object.entries(options.env).filter(
|
|
([_, v]) => v != undefined,
|
|
)) {
|
|
extra.push(`--env=${k}=${v}`)
|
|
}
|
|
}
|
|
await this.killLeader()
|
|
this.leaderExited = false
|
|
this.leader = cp.spawn(
|
|
'start-container',
|
|
[
|
|
'subcontainer',
|
|
'launch',
|
|
`--env-file=/media/startos/images/${this.imageId}.env`,
|
|
`--user=${user}`,
|
|
`--workdir=${workdir}`,
|
|
...extra,
|
|
this.rootfs,
|
|
...command,
|
|
],
|
|
{ ...options, stdio: 'inherit' },
|
|
)
|
|
this.leader.on('exit', () => {
|
|
this.leaderExited = true
|
|
})
|
|
return this.leader as cp.ChildProcessWithoutNullStreams
|
|
}
|
|
|
|
async spawn(
|
|
command: string[],
|
|
options: CommandOptions & StdioOptions = { stdio: 'inherit' },
|
|
): Promise<cp.ChildProcess> {
|
|
await this.waitProc()
|
|
const imageMeta: T.ImageMetadata = await fs
|
|
.readFile(`/media/startos/images/${this.imageId}.json`, {
|
|
encoding: 'utf8',
|
|
})
|
|
.catch(() => '{}')
|
|
.then(JSON.parse)
|
|
let extra: string[] = []
|
|
let user = imageMeta.user || 'root'
|
|
if (options?.user) {
|
|
user = options.user
|
|
delete options.user
|
|
}
|
|
let workdir = imageMeta.workdir || '/'
|
|
if (options.cwd) {
|
|
workdir = options.cwd
|
|
delete options.cwd
|
|
}
|
|
if (options?.env) {
|
|
for (let [k, v] of Object.entries(options.env).filter(
|
|
([_, v]) => v != undefined,
|
|
)) {
|
|
extra.push(`--env=${k}=${v}`)
|
|
}
|
|
}
|
|
return cp.spawn(
|
|
'start-container',
|
|
[
|
|
'subcontainer',
|
|
'exec',
|
|
`--env-file=/media/startos/images/${this.imageId}.env`,
|
|
`--user=${user}`,
|
|
`--workdir=${workdir}`,
|
|
...extra,
|
|
this.rootfs,
|
|
...command,
|
|
],
|
|
options,
|
|
)
|
|
}
|
|
|
|
/**
|
|
* @description Write a file to the subcontainer's filesystem
|
|
* @param path Path relative to the subcontainer rootfs (e.g. "/etc/config.json")
|
|
* @param data The data to write
|
|
* @param options Optional write options (same as node:fs/promises writeFile)
|
|
*/
|
|
async writeFile(
|
|
path: string,
|
|
data:
|
|
| string
|
|
| NodeJS.ArrayBufferView
|
|
| Iterable<string | NodeJS.ArrayBufferView>
|
|
| AsyncIterable<string | NodeJS.ArrayBufferView>,
|
|
options?: Parameters<typeof fs.writeFile>[2],
|
|
): Promise<void> {
|
|
const fullPath = this.subpath(path)
|
|
const dir = fullPath.replace(/\/[^/]*\/?$/, '')
|
|
await fs.mkdir(dir, { recursive: true })
|
|
return fs.writeFile(fullPath, data, options)
|
|
}
|
|
|
|
rc(): SubContainerRc<Manifest, Effects> {
|
|
return new SubContainerRc(this)
|
|
}
|
|
|
|
isOwned(): this is SubContainerOwned<Manifest, Effects> {
|
|
return true
|
|
}
|
|
}
|
|
|
|
/**
|
|
* A reference-counted handle to a {@link SubContainerOwned}.
|
|
*
|
|
* Multiple `SubContainerRc` instances can share one underlying subcontainer.
|
|
* The subcontainer is destroyed only when the last reference is released via `destroy()`.
|
|
*/
|
|
export class SubContainerRc<
|
|
Manifest extends T.SDKManifest,
|
|
Effects extends T.Effects = T.Effects,
|
|
>
|
|
extends Drop
|
|
implements SubContainer<Manifest, Effects>
|
|
{
|
|
get imageId() {
|
|
return this.subcontainer.imageId
|
|
}
|
|
get rootfs() {
|
|
return this.subcontainer.rootfs
|
|
}
|
|
get guid() {
|
|
return this.subcontainer.guid
|
|
}
|
|
subpath(path: string): string {
|
|
return this.subcontainer.subpath(path)
|
|
}
|
|
private destroyed = false
|
|
private destroying: Promise<null> | null = null
|
|
public constructor(
|
|
private readonly subcontainer: SubContainerOwned<Manifest, Effects>,
|
|
) {
|
|
subcontainer.rcs++
|
|
super()
|
|
}
|
|
static async of<Manifest extends T.SDKManifest, Effects extends T.Effects>(
|
|
effects: Effects,
|
|
image: {
|
|
imageId: keyof Manifest['images'] & T.ImageId
|
|
sharedRun?: boolean
|
|
},
|
|
mounts:
|
|
| (Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>)
|
|
| null,
|
|
name: string,
|
|
) {
|
|
return new SubContainerRc(
|
|
await SubContainerOwned.of(effects, image, mounts, name),
|
|
)
|
|
}
|
|
|
|
static async withTemp<
|
|
Manifest extends T.SDKManifest,
|
|
T,
|
|
Effects extends T.Effects,
|
|
>(
|
|
effects: Effects,
|
|
image: {
|
|
imageId: keyof Manifest['images'] & T.ImageId
|
|
sharedRun?: boolean
|
|
},
|
|
mounts:
|
|
| (Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>)
|
|
| null,
|
|
name: string,
|
|
fn: (subContainer: SubContainer<Manifest, Effects>) => Promise<T>,
|
|
): Promise<T> {
|
|
const subContainer = await SubContainerRc.of(effects, image, mounts, name)
|
|
try {
|
|
return await fn(subContainer)
|
|
} finally {
|
|
await subContainer.destroy()
|
|
}
|
|
}
|
|
|
|
async mount(
|
|
mounts: Effects extends BackupEffects
|
|
? Mounts<
|
|
Manifest,
|
|
{
|
|
subpath: string | null
|
|
mountpoint: string
|
|
}
|
|
>
|
|
: Mounts<Manifest, never>,
|
|
): Promise<this> {
|
|
await this.subcontainer.mount(mounts)
|
|
return this
|
|
}
|
|
|
|
get destroy() {
|
|
return async () => {
|
|
if (!this.destroyed && !this.destroying) {
|
|
const rcs = --this.subcontainer.rcs
|
|
if (rcs <= 0) {
|
|
this.destroying = this.subcontainer.destroy()
|
|
if (rcs < 0) console.error(new Error('UNREACHABLE: rcs < 0').stack)
|
|
}
|
|
}
|
|
if (this.destroying) {
|
|
await this.destroying
|
|
}
|
|
this.destroyed = true
|
|
this.destroying = null
|
|
return null
|
|
}
|
|
}
|
|
|
|
onDrop(): void {
|
|
this.destroy()
|
|
}
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer
|
|
* DOES NOT THROW ON NONZERO EXIT CODE (see execFail)
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
async exec(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs?: number | null,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
throw: () => { stdout: string | Buffer; stderr: string | Buffer }
|
|
exitCode: number | null
|
|
exitSignal: NodeJS.Signals | null
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}> {
|
|
return this.subcontainer.exec(command, options, timeoutMs, abort)
|
|
}
|
|
|
|
/**
|
|
* @description run a command inside this subcontainer, throwing on non-zero exit status
|
|
* @param commands an array representing the command and args to execute
|
|
* @param options
|
|
* @param timeoutMs how long to wait before killing the command in ms
|
|
* @returns
|
|
*/
|
|
async execFail(
|
|
command: string[],
|
|
options?: CommandOptions & ExecOptions,
|
|
timeoutMs?: number | null,
|
|
abort?: AbortController,
|
|
): Promise<{
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
}> {
|
|
return this.subcontainer.execFail(command, options, timeoutMs, abort)
|
|
}
|
|
|
|
async launch(
|
|
command: string[],
|
|
options?: CommandOptions,
|
|
): Promise<cp.ChildProcessWithoutNullStreams> {
|
|
return this.subcontainer.launch(command, options)
|
|
}
|
|
|
|
async spawn(
|
|
command: string[],
|
|
options: CommandOptions & StdioOptions = { stdio: 'inherit' },
|
|
): Promise<cp.ChildProcess> {
|
|
return this.subcontainer.spawn(command, options)
|
|
}
|
|
|
|
/**
|
|
* @description Write a file to the subcontainer's filesystem
|
|
* @param path Path relative to the subcontainer rootfs (e.g. "/etc/config.json")
|
|
* @param data The data to write
|
|
* @param options Optional write options (same as node:fs/promises writeFile)
|
|
*/
|
|
async writeFile(
|
|
path: string,
|
|
data:
|
|
| string
|
|
| NodeJS.ArrayBufferView
|
|
| Iterable<string | NodeJS.ArrayBufferView>
|
|
| AsyncIterable<string | NodeJS.ArrayBufferView>,
|
|
options?: Parameters<typeof fs.writeFile>[2],
|
|
): Promise<void> {
|
|
return this.subcontainer.writeFile(path, data, options)
|
|
}
|
|
|
|
rc(): SubContainerRc<Manifest, Effects> {
|
|
return this.subcontainer.rc()
|
|
}
|
|
|
|
isOwned(): this is SubContainerOwned<Manifest, Effects> {
|
|
return false
|
|
}
|
|
}
|
|
|
|
export type CommandOptions = {
|
|
/**
|
|
* Environment variables to set for this command
|
|
*/
|
|
env?: { [variable in string]?: string }
|
|
/**
|
|
* the working directory to run this command in
|
|
*/
|
|
cwd?: string
|
|
/**
|
|
* the user to run this command as
|
|
*/
|
|
user?: string
|
|
}
|
|
|
|
export type StdioOptions = {
|
|
stdio?: cp.IOType
|
|
}
|
|
|
|
/** UID/GID mapping for mount id-remapping (see kernel idmappings docs) */
|
|
export type IdMap = { fromId: number; toId: number; range: number }
|
|
|
|
/** Union of all mount option types supported by the subcontainer runtime */
|
|
export type MountOptions =
|
|
| MountOptionsVolume
|
|
| MountOptionsAssets
|
|
| MountOptionsPointer
|
|
| MountOptionsBackup
|
|
|
|
/** Mount options for binding a service volume into a subcontainer */
|
|
export type MountOptionsVolume = {
|
|
type: 'volume'
|
|
volumeId: string
|
|
subpath: string | null
|
|
readonly: boolean
|
|
filetype: 'file' | 'directory' | 'infer'
|
|
idmap: IdMap[]
|
|
}
|
|
|
|
/** Mount options for binding packaged static assets into a subcontainer */
|
|
export type MountOptionsAssets = {
|
|
type: 'assets'
|
|
subpath: string | null
|
|
filetype: 'file' | 'directory' | 'infer'
|
|
idmap: { fromId: number; toId: number; range: number }[]
|
|
}
|
|
|
|
/** Mount options for binding a dependency package's volume into a subcontainer */
|
|
export type MountOptionsPointer = {
|
|
type: 'pointer'
|
|
packageId: string
|
|
volumeId: string
|
|
subpath: string | null
|
|
readonly: boolean
|
|
idmap: { fromId: number; toId: number; range: number }[]
|
|
}
|
|
|
|
/** Mount options for binding the backup directory into a subcontainer */
|
|
export type MountOptionsBackup = {
|
|
type: 'backup'
|
|
subpath: string | null
|
|
filetype: 'file' | 'directory' | 'infer'
|
|
idmap: { fromId: number; toId: number; range: number }[]
|
|
}
|
|
function wait(time: number) {
|
|
return new Promise((resolve) => setTimeout(resolve, time))
|
|
}
|
|
|
|
/**
|
|
* Error thrown when a subcontainer command exits with a non-zero code or signal.
|
|
* Contains the full result including stdout, stderr, exit code, and exit signal.
|
|
*/
|
|
export class ExitError extends Error {
|
|
constructor(
|
|
readonly command: string,
|
|
readonly result: {
|
|
exitCode: number | null
|
|
exitSignal: T.Signals | null
|
|
stdout: string | Buffer
|
|
stderr: string | Buffer
|
|
},
|
|
) {
|
|
let message: string
|
|
if (result.exitCode) {
|
|
message = `${command} failed with exit code ${result.exitCode}: ${result.stderr}`
|
|
} else if (result.exitSignal) {
|
|
message = `${command} terminated with signal ${result.exitSignal}: ${result.stderr}`
|
|
} else {
|
|
message = `${command} succeeded: ${result.stdout}`
|
|
}
|
|
super(message)
|
|
}
|
|
}
|