Feature/lxc container runtime (#2514)

* wip: static-server errors

* wip: fix wifi

* wip: Fix the service_effects

* wip: Fix cors in the middleware

* wip(chore): Auth clean up the lint.

* wip(fix): Vhost

* wip: continue manager refactor

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: service manager refactor

* wip: Some fixes

* wip(fix): Fix the lib.rs

* wip

* wip(fix): Logs

* wip: bins

* wip(inspect): Add in the inspect

* wip: config

* wip(fix): Diagnostic

* wip(fix): Dependencies

* wip: context

* wip(fix): Sorta auth

* wip: warnings

* wip(fix): registry/admin

* wip(fix): marketplace

* wip(fix): More files converted and fixed with the linter and config

* wip: Working on the static server

* wip(fix): static server

* wip: Remove some async

* wip: Something about the request and regular rpc

* wip: gut install

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: Convert the static server into the new system

* wip delete file

* test

* wip(fix): vhost does not need the with safe defaults

* wip: Adding in the wifi

* wip: Fix the developer and the verify

* wip: new install flow

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* fix middleware

* wip

* wip: Fix the auth

* wip

* continue service refactor

* feature: Service get_config

* feat: Action

* wip: Fighting the great fight against the borrow checker

* wip: Remove an error in a file that I just need to deal with later

* chore: Add in some more lifetime stuff to the services

* wip: Install fix on lifetime

* cleanup

* wip: Deal with the borrow later

* more cleanup

* resolve borrowchecker errors

* wip(feat): add in the handler for the socket, for now

* wip(feat): Update the service_effect_handler::action

* chore: Add in the changes to make sure the from_service goes to context

* chore: Change the

* refactor service map

* fix references to service map

* fill out restore

* wip: Before I work on the store stuff

* fix backup module

* handle some warnings

* feat: add in the ui components on the rust side

* feature: Update the procedures

* chore: Update the js side of the main and a few of the others

* chore: Update the rpc listener to match the persistent container

* wip: Working on updating some things to have a better name

* wip(feat): Try and get the rpc to return the correct shape?

* lxc wip

* wip(feat): Try and get the rpc to return the correct shape?

* build for container runtime wip

* remove container-init

* fix build

* fix error

* chore: Update to work I suppose

* lxc wip

* remove docker module and feature

* download alpine squashfs automatically

* overlays effect

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* chore: Add the overlay effect

* feat: Add the mounter in the main

* chore: Convert to use the mounts, still need to work with the sandbox

* install fixes

* fix ssl

* fixes from testing

* implement tmpfile for upload

* wip

* misc fixes

* cleanup

* cleanup

* better progress reporting

* progress for sideload

* return real guid

* add devmode script

* fix lxc rootfs path

* fix percentage bar

* fix progress bar styling

* fix build for unstable

* tweaks

* label progress

* tweaks

* update progress more often

* make symlink in rpc_client

* make socket dir

* fix parent path

* add start-cli to container

* add echo and gitInfo commands

* wip: Add the init + errors

* chore: Add in the exit effect for the system

* chore: Change the type to null for failure to parse

* move sigterm timeout to stopping status

* update order

* chore: Update the return type

* remove dbg

* change the map error

* chore: Update the thing to capture id

* chore: Add some lifetime changes

* chore: Update the logging

* chore: Update the package to run module

* use From for RpcError

* chore: Update to use import instead

* chore: update

* chore: Use require for the backup

* fix a default

* update the type that is wrong

* chore: Update the type of the manifest

* chore: Update to make null

* only symlink if not exists

* get rid of double result

* better debug info for ErrorCollection

* chore: Update effects

* chore: fix

* mount assets and volumes

* add exec instead of spawn

* fix mounting in image

* fix overlay mounts

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* misc fixes

* feat: Fix two

* fix: systemForEmbassy main

* chore: Fix small part of main loop

* chore: Modify the bundle

* merge

* fix main loop

* move tsc to makefile

* chore: Update the return types of the health check

* fix client

* chore: Convert the todo to use tsmatches

* add in the fixes for the seen and create the hack to allow demo

* chore: Update to include the systemForStartOs

* chore: Update to the latest types from the expected output

* fixes

* fix typo

* Don't emit if failure on tsc

* wip

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* add s9pk api

* add inspection

* add inspect manifest

* newline after display serializable

* fix squashfs in image name

* edit manifest

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* wait for response on repl

* ignore sig for now

* ignore sig for now

* re-enable sig verification

* fix

* wip

* env and chroot

* add profiling logs

* set uid & gid in squashfs to 100000

* set uid of sqfs to 100000

* fix mksquashfs args

* add env to compat

* fix

* re-add docker feature flag

* fix docker output format being stupid

* here be dragons

* chore: Add in the cross compiling for something

* fix npm link

* extract logs from container on exit

* chore: Update for testing

* add log capture to drop trait

* chore: add in the modifications that I make

* chore: Update small things for no updates

* chore: Update the types of something

* chore: Make main not complain

* idmapped mounts

* idmapped volumes

* re-enable kiosk

* chore: Add in some logging for the new system

* bring in start-sdk

* remove avahi

* chore: Update the deps

* switch to musl

* chore: Update the version of prettier

* chore: Organize

* chore: Update some of the headers back to the standard of fetch

* fix musl build

* fix idmapped mounts

* fix cross build

* use cross compiler for correct arch

* feat: Add in the faked ssl stuff for the effects

* @dr_bonez provided a solution here

* chore: Something that DrBonez

* chore: up

* wip: We have a working server!!!

* wip

* uninstall

* wip

* test

---------

Co-authored-by: J H <dragondef@gmail.com>
Co-authored-by: J H <Blu-J@users.noreply.github.com>
Co-authored-by: J H <2364004+Blu-J@users.noreply.github.com>
This commit is contained in: Aiden McClelland, 2024-02-17 11:14:14 -07:00, committed by GitHub.
Parent 65009e2f69, commit fab13db4b4.
326 changed files with 31708 additions and 13987 deletions.


@@ -0,0 +1,192 @@
use std::any::Any;
use std::future::ready;
use std::time::Duration;
use futures::future::BoxFuture;
use futures::{Future, FutureExt, TryFutureExt};
use helpers::NonDetachingJoinHandle;
use tokio::sync::oneshot::error::TryRecvError;
use tokio::sync::{mpsc, oneshot};
use crate::prelude::*;
use crate::util::Never;
pub trait Actor: Send + 'static {
#[allow(unused_variables)]
fn init(&mut self, jobs: &mut BackgroundJobs) {}
}
#[async_trait::async_trait]
pub trait Handler<M>: Actor {
type Response: Any + Send;
async fn handle(&mut self, msg: M, jobs: &mut BackgroundJobs) -> Self::Response;
}
#[async_trait::async_trait]
trait Message<A>: Send {
async fn handle_with(
self: Box<Self>,
actor: &mut A,
jobs: &mut BackgroundJobs,
) -> Box<dyn Any + Send>;
}
#[async_trait::async_trait]
impl<M: Send, A: Actor> Message<A> for M
where
A: Handler<M>,
{
async fn handle_with(
self: Box<Self>,
actor: &mut A,
jobs: &mut BackgroundJobs,
) -> Box<dyn Any + Send> {
Box::new(actor.handle(*self, jobs).await)
}
}
type Request<A> = (Box<dyn Message<A>>, oneshot::Sender<Box<dyn Any + Send>>);
#[derive(Default)]
pub struct BackgroundJobs {
jobs: Vec<BoxFuture<'static, ()>>,
}
impl BackgroundJobs {
pub fn add_job(&mut self, fut: impl Future<Output = ()> + Send + 'static) {
self.jobs.push(fut.boxed());
}
}
impl Future for BackgroundJobs {
type Output = Never;
fn poll(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Self::Output> {
let complete = self
.jobs
.iter_mut()
.enumerate()
.filter_map(|(i, f)| match f.poll_unpin(cx) {
std::task::Poll::Pending => None,
std::task::Poll::Ready(_) => Some(i),
})
.collect::<Vec<_>>();
for idx in complete.into_iter().rev() {
#[allow(clippy::let_underscore_future)]
let _ = self.jobs.swap_remove(idx);
}
std::task::Poll::Pending
}
}
pub struct SimpleActor<A: Actor> {
shutdown: oneshot::Sender<()>,
runtime: NonDetachingJoinHandle<()>,
messenger: mpsc::UnboundedSender<Request<A>>,
}
impl<A: Actor> SimpleActor<A> {
pub fn new(mut actor: A) -> Self {
let (shutdown_send, mut shutdown_recv) = oneshot::channel();
let (messenger_send, mut messenger_recv) = mpsc::unbounded_channel::<Request<A>>();
let runtime = NonDetachingJoinHandle::from(tokio::spawn(async move {
let mut bg = BackgroundJobs::default();
actor.init(&mut bg);
loop {
tokio::select! {
_ = &mut bg => (),
msg = messenger_recv.recv() => match msg {
Some((msg, reply)) if shutdown_recv.try_recv() == Err(TryRecvError::Empty) => {
let mut new_bg = BackgroundJobs::default();
tokio::select! {
res = msg.handle_with(&mut actor, &mut new_bg) => { reply.send(res); },
_ = &mut bg => (),
}
bg.jobs.append(&mut new_bg.jobs);
}
_ => break,
},
}
}
}));
Self {
shutdown: shutdown_send,
runtime,
messenger: messenger_send,
}
}
/// Message is guaranteed to be queued immediately
pub fn queue<M: Send + 'static>(
&self,
message: M,
) -> impl Future<Output = Result<A::Response, Error>>
where
A: Handler<M>,
{
if self.runtime.is_finished() {
return futures::future::Either::Left(ready(Err(Error::new(
eyre!("actor runtime has exited"),
ErrorKind::Unknown,
))));
}
let (reply_send, reply_recv) = oneshot::channel();
self.messenger.send((Box::new(message), reply_send));
futures::future::Either::Right(
reply_recv
.map_err(|_| Error::new(eyre!("actor runtime has exited"), ErrorKind::Unknown))
.and_then(|a| {
ready(
a.downcast()
.map_err(|_| {
Error::new(
eyre!("received incorrect type in response"),
ErrorKind::Incoherent,
)
})
.map(|a| *a),
)
}),
)
}
pub async fn send<M: Send + 'static>(&self, message: M) -> Result<A::Response, Error>
where
A: Handler<M>,
{
self.queue(message).await
}
pub async fn shutdown(self, strategy: PendingMessageStrategy) {
drop(self.messenger);
let timeout = match strategy {
PendingMessageStrategy::CancelAll => {
self.shutdown.send(());
Some(Duration::from_secs(0))
}
PendingMessageStrategy::FinishCurrentCancelPending { timeout } => {
self.shutdown.send(());
timeout
}
PendingMessageStrategy::FinishAll { timeout } => timeout,
};
let aborter = if let Some(timeout) = timeout {
let hdl = self.runtime.abort_handle();
async move {
tokio::time::sleep(timeout).await;
hdl.abort();
}
.boxed()
} else {
futures::future::pending().boxed()
};
tokio::select! {
_ = aborter => (),
_ = self.runtime => (),
}
}
}
pub enum PendingMessageStrategy {
CancelAll,
FinishCurrentCancelPending { timeout: Option<Duration> },
FinishAll { timeout: Option<Duration> },
}
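The `Future` impl for `BackgroundJobs` collects the indices of completed jobs in ascending order and then `swap_remove`s them in reverse. A std-only sketch (a hypothetical helper, not part of this crate) of why the reverse order matters:

```rust
// Remove the elements at `indices` (collected in ascending order) using
// swap_remove. Iterating in reverse keeps every not-yet-removed index valid:
// swap_remove(i) only disturbs positions >= i, which were already handled.
fn remove_completed<T>(jobs: &mut Vec<T>, indices: Vec<usize>) {
    for idx in indices.into_iter().rev() {
        jobs.swap_remove(idx);
    }
}
```

Removing in ascending order instead would shift or swap elements under the later indices and drop the wrong jobs.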


@@ -0,0 +1,36 @@
use std::marker::PhantomData;
use std::str::FromStr;
use clap::builder::TypedValueParser;
use crate::prelude::*;
pub struct FromStrParser<T>(PhantomData<T>);
impl<T> FromStrParser<T> {
pub fn new() -> Self {
Self(PhantomData)
}
}
impl<T> Clone for FromStrParser<T> {
fn clone(&self) -> Self {
Self(PhantomData)
}
}
impl<T> TypedValueParser for FromStrParser<T>
where
T: FromStr + Clone + Send + Sync + 'static,
T::Err: std::fmt::Display,
{
type Value = T;
fn parse_ref(
&self,
_: &clap::Command,
_: Option<&clap::Arg>,
value: &std::ffi::OsStr,
) -> Result<Self::Value, clap::Error> {
value
.to_string_lossy()
.parse()
.map_err(|e| clap::Error::raw(clap::error::ErrorKind::ValueValidation, e))
}
}
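`parse_ref` above simply delegates to `FromStr` and wraps the error for clap. A std-only sketch of the same delegation, with the clap plumbing replaced by a plain `String` error (an assumption for a dependency-free example):

```rust
use std::str::FromStr;

// Mirrors FromStrParser::parse_ref: parse via FromStr, surface the error as a
// display string (clap would wrap it in a ValueValidation error instead).
fn parse_value<T>(raw: &str) -> Result<T, String>
where
    T: FromStr,
    T::Err: std::fmt::Display,
{
    raw.parse().map_err(|e: T::Err| e.to_string())
}
```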


@@ -1,58 +0,0 @@
use std::fs::File;
use std::path::{Path, PathBuf};
use patch_db::Value;
use serde::Deserialize;
use crate::prelude::*;
use crate::util::serde::IoFormat;
use crate::{Config, Error};
pub const DEVICE_CONFIG_PATH: &str = "/media/embassy/config/config.yaml";
pub const CONFIG_PATH: &str = "/etc/embassy/config.yaml";
pub const CONFIG_PATH_LOCAL: &str = ".embassy/config.yaml";
pub fn local_config_path() -> Option<PathBuf> {
if let Ok(home) = std::env::var("HOME") {
Some(Path::new(&home).join(CONFIG_PATH_LOCAL))
} else {
None
}
}
/// BLOCKING
pub fn load_config_from_paths<'a, T: for<'de> Deserialize<'de>>(
paths: impl IntoIterator<Item = impl AsRef<Path>>,
) -> Result<T, Error> {
let mut config = Default::default();
for path in paths {
if path.as_ref().exists() {
let format: IoFormat = path
.as_ref()
.extension()
.and_then(|s| s.to_str())
.map(|f| f.parse())
.transpose()?
.unwrap_or_default();
let new = format.from_reader(File::open(path)?)?;
config = merge_configs(config, new);
}
}
from_value(Value::Object(config))
}
pub fn merge_configs(mut first: Config, second: Config) -> Config {
for (k, v) in second.into_iter() {
let new = match first.remove(&k) {
None => v,
Some(old) => match (old, v) {
(Value::Object(first), Value::Object(second)) => {
Value::Object(merge_configs(first, second))
}
(first, _) => first,
},
};
first.insert(k, new);
}
first
}
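The merge rule above is easy to miss: objects merge recursively, but on any other conflict the *first* config wins, so since `load_config_from_paths` folds paths in order, earlier paths take precedence. A sketch of the same logic over a minimal stand-in for `patch_db::Value` (the real type has more variants):

```rust
use std::collections::BTreeMap;

// Minimal stand-in for patch_db::Value, just enough to show the merge rule.
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Object(BTreeMap<String, Val>),
    Str(String),
}

// Mirrors merge_configs: objects merge recursively; any other conflict keeps
// the value already in `first`.
fn merge(mut first: BTreeMap<String, Val>, second: BTreeMap<String, Val>) -> BTreeMap<String, Val> {
    for (k, v) in second {
        let new = match first.remove(&k) {
            None => v,
            Some(old) => match (old, v) {
                (Val::Object(a), Val::Object(b)) => Val::Object(merge(a, b)),
                (old, _) => old,
            },
        };
        first.insert(k, new);
    }
    first
}
```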


@@ -7,3 +7,119 @@ pub fn ed25519_expand_key(key: &SecretKey) -> [u8; EXPANDED_SECRET_KEY_LENGTH] {
)
.to_bytes()
}
use aes::cipher::{CipherKey, NewCipher, Nonce, StreamCipher};
use aes::Aes256Ctr;
use hmac::Hmac;
use josekit::jwk::Jwk;
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tracing::instrument;
pub fn pbkdf2(password: impl AsRef<[u8]>, salt: impl AsRef<[u8]>) -> CipherKey<Aes256Ctr> {
let mut aeskey = CipherKey::<Aes256Ctr>::default();
pbkdf2::pbkdf2::<Hmac<Sha256>>(
password.as_ref(),
salt.as_ref(),
1000,
aeskey.as_mut_slice(),
)
.unwrap();
aeskey
}
pub fn encrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec<u8> {
let prefix: [u8; 32] = rand::random();
let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
let mut aes = Aes256Ctr::new(&aeskey, ctr);
let mut res = Vec::with_capacity(32 + input.as_ref().len());
res.extend_from_slice(&prefix[..]);
res.extend_from_slice(input.as_ref());
aes.apply_keystream(&mut res[32..]);
res
}
pub fn decrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec<u8> {
if input.as_ref().len() < 32 {
return Vec::new();
}
let (prefix, rest) = input.as_ref().split_at(32);
let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
let mut aes = Aes256Ctr::new(&aeskey, ctr);
let mut res = rest.to_vec();
aes.apply_keystream(&mut res);
res
}
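The wire framing used by `encrypt_slice`/`decrypt_slice` is: a 32-byte random prefix in the clear, where bytes `0..16` are the CTR nonce and bytes `16..32` are the PBKDF2 salt, followed by the ciphertext. A toy illustration of that framing with AES-256-CTR and PBKDF2 replaced by a trivial XOR keystream (NOT the real cipher; this is only to show the prefix layout in a std-only sketch):

```rust
// Assumption: a stand-in for AES-CTR(pbkdf2(password, salt), nonce). The
// whole 32-byte prefix feeds the toy keystream, matching how the real code
// derives both nonce and salt from it.
fn keystream_byte(password: &[u8], prefix: &[u8], i: usize) -> u8 {
    password[i % password.len()] ^ prefix[i % 32] ^ (i as u8)
}

fn toy_encrypt(input: &[u8], password: &[u8], prefix: [u8; 32]) -> Vec<u8> {
    let mut res = Vec::with_capacity(32 + input.len());
    res.extend_from_slice(&prefix); // prefix travels in the clear
    res.extend_from_slice(input);
    for i in 0..input.len() {
        res[32 + i] ^= keystream_byte(password, &prefix, i);
    }
    res
}

fn toy_decrypt(input: &[u8], password: &[u8]) -> Vec<u8> {
    if input.len() < 32 {
        return Vec::new(); // same behavior as decrypt_slice on short input
    }
    let (prefix, rest) = input.split_at(32);
    let mut res = rest.to_vec();
    for i in 0..res.len() {
        res[i] ^= keystream_byte(password, prefix, i);
    }
    res
}
```

Note that, like `decrypt_slice`, the sketch returns an empty `Vec` rather than an error when the input is too short to contain the prefix.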
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct EncryptedWire {
encrypted: serde_json::Value,
}
impl EncryptedWire {
#[instrument(skip_all)]
pub fn decrypt(self, current_secret: impl AsRef<Jwk>) -> Option<String> {
let current_secret = current_secret.as_ref();
let decrypter = match josekit::jwe::alg::ecdh_es::EcdhEsJweAlgorithm::EcdhEs
.decrypter_from_jwk(current_secret)
{
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not set up jwk");
tracing::debug!("{:?}", e);
return None;
}
};
let encrypted = match serde_json::to_string(&self.encrypted) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not serialize");
tracing::debug!("{:?}", e);
return None;
}
};
let (decoded, _) = match josekit::jwe::deserialize_json(&encrypted, &decrypter) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not decrypt");
tracing::debug!("{:?}", e);
return None;
}
};
match String::from_utf8(decoded) {
Ok(a) => Some(a),
Err(e) => {
tracing::warn!("Could not decrypt into utf8");
tracing::debug!("{:?}", e);
return None;
}
}
}
}
/// We created this test by first generating the private key, then restoring from that key so the
/// test is reproducible. The frontend then encrypted a password, and we verify that the captured
/// output (copied by hand) decrypts to the expected value.
#[test]
fn test_gen_awk() {
let private_key: Jwk = serde_json::from_str(
r#"{
"kty": "EC",
"crv": "P-256",
"d": "3P-MxbUJtEhdGGpBCRFXkUneGgdyz_DGZWfIAGSCHOU",
"x": "yHTDYSfjU809fkSv9MmN4wuojf5c3cnD7ZDN13n-jz4",
"y": "8Mpkn744A5KDag0DmX2YivB63srjbugYZzWc3JOpQXI"
}"#,
)
.unwrap();
let encrypted: EncryptedWire = serde_json::from_str(r#"{
"encrypted": { "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiRUNESC1FUyIsImtpZCI6ImgtZnNXUVh2Tm95dmJEazM5dUNsQ0NUdWc5N3MyZnJockJnWUVBUWVtclUiLCJlcGsiOnsia3R5IjoiRUMiLCJjcnYiOiJQLTI1NiIsIngiOiJmRkF0LXNWYWU2aGNkdWZJeUlmVVdUd3ZvWExaTkdKRHZIWVhIckxwOXNNIiwieSI6IjFvVFN6b00teHlFZC1SLUlBaUFHdXgzS1dJZmNYZHRMQ0JHLUh6MVkzY2sifX0", "iv": "NbwvfvWOdLpZfYRIZUrkcw", "ciphertext": "Zc5Br5kYOlhPkIjQKOLMJw", "tag": "EPoch52lDuCsbUUulzZGfg" }
}"#).unwrap();
assert_eq!(
"testing12345",
&encrypted.decrypt(std::sync::Arc::new(private_key)).unwrap()
);
}


@@ -1,239 +0,0 @@
use std::net::Ipv4Addr;
use std::time::Duration;
use models::{Error, ErrorKind, PackageId, ResultExt, Version};
use nix::sys::signal::Signal;
use tokio::process::Command;
use crate::util::Invoke;
#[cfg(feature = "docker")]
pub const CONTAINER_TOOL: &str = "docker";
#[cfg(not(feature = "docker"))]
pub const CONTAINER_TOOL: &str = "podman";
#[cfg(feature = "docker")]
pub const CONTAINER_DATADIR: &str = "/var/lib/docker";
#[cfg(not(feature = "docker"))]
pub const CONTAINER_DATADIR: &str = "/var/lib/containers";
pub struct DockerImageSha(String);
// docker images start9/${package}/*:${version} -q --no-trunc
pub async fn images_for(
package: &PackageId,
version: &Version,
) -> Result<Vec<DockerImageSha>, Error> {
Ok(String::from_utf8(
Command::new(CONTAINER_TOOL)
.arg("images")
.arg(format!("start9/{package}/*:{version}"))
.arg("--no-trunc")
.arg("-q")
.invoke(ErrorKind::Docker)
.await?,
)?
.lines()
.map(|l| DockerImageSha(l.trim().to_owned()))
.collect())
}
// docker rmi -f ${sha}
pub async fn remove_image(sha: &DockerImageSha) -> Result<(), Error> {
match Command::new(CONTAINER_TOOL)
.arg("rmi")
.arg("-f")
.arg(&sha.0)
.invoke(ErrorKind::Docker)
.await
.map(|_| ())
{
Err(e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such image") =>
{
Ok(())
}
a => a,
}?;
Ok(())
}
// docker image prune -f
pub async fn prune_images() -> Result<(), Error> {
Command::new(CONTAINER_TOOL)
.arg("image")
.arg("prune")
.arg("-f")
.invoke(ErrorKind::Docker)
.await?;
Ok(())
}
// docker container inspect ${name} --format '{{.NetworkSettings.Networks.start9.IPAddress}}'
pub async fn get_container_ip(name: &str) -> Result<Option<Ipv4Addr>, Error> {
match Command::new(CONTAINER_TOOL)
.arg("container")
.arg("inspect")
.arg(name)
.arg("--format")
.arg("{{.NetworkSettings.Networks.start9.IPAddress}}")
.invoke(ErrorKind::Docker)
.await
{
Err(e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
Ok(None)
}
Err(e) => Err(e),
Ok(a) => {
let out = std::str::from_utf8(&a)?.trim();
if out.is_empty() {
Ok(None)
} else {
Ok(Some({
out.parse()
.with_ctx(|_| (ErrorKind::ParseNetAddress, out.to_string()))?
}))
}
}
}
}
// docker stop -t ${timeout} -s ${signal} ${name}
pub async fn stop_container(
name: &str,
timeout: Option<Duration>,
signal: Option<Signal>,
) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("stop");
if let Some(dur) = timeout {
cmd.arg("-t").arg(dur.as_secs().to_string());
}
if let Some(sig) = signal {
cmd.arg("-s").arg(sig.to_string());
}
cmd.arg(name);
match cmd.invoke(ErrorKind::Docker).await {
Ok(_) => Ok(()),
Err(mut e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
e.kind = ErrorKind::NotFound;
Err(e)
}
Err(e) => Err(e),
}
}
// docker kill -s ${signal} ${name}
pub async fn kill_container(name: &str, signal: Option<Signal>) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("kill");
if let Some(sig) = signal {
cmd.arg("-s").arg(sig.to_string());
}
cmd.arg(name);
match cmd.invoke(ErrorKind::Docker).await {
Ok(_) => Ok(()),
Err(mut e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
e.kind = ErrorKind::NotFound;
Err(e)
}
Err(e) => Err(e),
}
}
// docker pause ${name}
pub async fn pause_container(name: &str) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("pause");
cmd.arg(name);
match cmd.invoke(ErrorKind::Docker).await {
Ok(_) => Ok(()),
Err(mut e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
e.kind = ErrorKind::NotFound;
Err(e)
}
Err(e) => Err(e),
}
}
// docker unpause ${name}
pub async fn unpause_container(name: &str) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("unpause");
cmd.arg(name);
match cmd.invoke(ErrorKind::Docker).await {
Ok(_) => Ok(()),
Err(mut e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
e.kind = ErrorKind::NotFound;
Err(e)
}
Err(e) => Err(e),
}
}
// docker rm -f ${name}
pub async fn remove_container(name: &str, force: bool) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("rm");
if force {
cmd.arg("-f");
}
cmd.arg(name);
match cmd.invoke(ErrorKind::Docker).await {
Ok(_) => Ok(()),
Err(e)
if e.source
.to_string()
.to_ascii_lowercase()
.contains("no such container") =>
{
Ok(())
}
Err(e) => Err(e),
}
}
// docker network create -d bridge --subnet ${subnet} --opt com.podman.network.bridge.name=${bridge_name}
pub async fn create_bridge_network(
name: &str,
subnet: &str,
bridge_name: &str,
) -> Result<(), Error> {
let mut cmd = Command::new(CONTAINER_TOOL);
cmd.arg("network").arg("create");
cmd.arg("-d").arg("bridge");
cmd.arg("--subnet").arg(subnet);
cmd.arg("--opt")
.arg(format!("com.docker.network.bridge.name={bridge_name}"));
cmd.arg(name);
cmd.invoke(ErrorKind::Docker).await?;
Ok(())
}
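Nearly every function in this (now removed) docker module uses the same error-tolerance pattern: a "no such container" failure from the container tool is treated as success so teardown is idempotent, while everything else propagates. Sketched here over a plain `String` error instead of the crate's `Error` type:

```rust
// Treat "no such container" as success (the container is already gone);
// propagate any other error unchanged.
fn ignore_missing(res: Result<(), String>) -> Result<(), String> {
    match res {
        Err(e) if e.to_ascii_lowercase().contains("no such container") => Ok(()),
        other => other,
    }
}
```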


@@ -0,0 +1,119 @@
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::future::abortable;
use futures::stream::{AbortHandle, Abortable};
use futures::{Future, FutureExt};
use tokio::sync::watch;
#[pin_project::pin_project(PinnedDrop)]
pub struct DropSignaling<F> {
#[pin]
fut: F,
on_drop: watch::Sender<bool>,
}
impl<F> DropSignaling<F> {
pub fn new(fut: F) -> Self {
Self {
fut,
on_drop: watch::channel(false).0,
}
}
pub fn subscribe(&self) -> DropHandle {
DropHandle(self.on_drop.subscribe())
}
}
impl<F> Future for DropSignaling<F>
where
F: Future,
{
type Output = F::Output;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
this.fut.poll(cx)
}
}
#[pin_project::pinned_drop]
impl<F> PinnedDrop for DropSignaling<F> {
fn drop(self: Pin<&mut Self>) {
let _ = self.on_drop.send(true);
}
}
#[derive(Clone)]
pub struct DropHandle(watch::Receiver<bool>);
impl DropHandle {
pub async fn wait(&mut self) {
self.0.wait_for(|a| *a).await;
}
}
#[pin_project::pin_project]
pub struct RemoteCancellable<F> {
#[pin]
fut: Abortable<DropSignaling<F>>,
on_drop: DropHandle,
handle: AbortHandle,
}
impl<F: Future> RemoteCancellable<F> {
pub fn new(fut: F) -> Self {
let sig_fut = DropSignaling::new(fut);
let on_drop = sig_fut.subscribe();
let (fut, handle) = abortable(sig_fut);
Self {
fut,
on_drop,
handle,
}
}
}
impl<F> RemoteCancellable<F> {
pub fn cancellation_handle(&self) -> CancellationHandle {
CancellationHandle {
on_drop: self.on_drop.clone(),
handle: self.handle.clone(),
}
}
}
impl<F> Future for RemoteCancellable<F>
where
F: Future,
{
type Output = Option<F::Output>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
this.fut.poll(cx).map(|a| a.ok())
}
}
#[derive(Clone)]
pub struct CancellationHandle {
on_drop: DropHandle,
handle: AbortHandle,
}
impl CancellationHandle {
pub fn cancel(&mut self) {
self.handle.abort();
}
pub async fn cancel_and_wait(&mut self) {
self.handle.abort();
self.on_drop.wait().await
}
}
#[tokio::test]
async fn test_cancellable() {
use std::sync::Arc;
let arc = Arc::new(());
let weak = Arc::downgrade(&arc);
let cancellable = RemoteCancellable::new(async move {
futures::future::pending::<()>().await;
drop(arc)
});
let mut handle = cancellable.cancellation_handle();
tokio::spawn(cancellable);
handle.cancel_and_wait().await;
assert!(weak.strong_count() == 0);
}


@@ -6,11 +6,11 @@ use std::io::Error as StdIOError;
use std::pin::Pin;
use std::task::{Context, Poll};
use bytes::Bytes;
use color_eyre::eyre::eyre;
use futures::Stream;
use http::header::{ACCEPT_RANGES, CONTENT_LENGTH, RANGE};
use hyper::body::Bytes;
use pin_project::pin_project;
use reqwest::header::{ACCEPT_RANGES, CONTENT_LENGTH, RANGE};
use reqwest::{Client, Url};
use tokio::io::{AsyncRead, AsyncSeek};
@@ -359,22 +359,3 @@ async fn main_test() {
assert_eq!(buf.len(), test_reader.total_bytes)
}
#[tokio::test]
#[ignore]
async fn s9pk_test() {
use tokio::io::BufReader;
let http_url = Url::parse("http://qhc6ac47cytstejcepk2ia3ipadzjhlkc5qsktsbl4e7u2krfmfuaqqd.onion/content/files/2022/09/ghost.s9pk").unwrap();
println!("Getting this resource: {}", http_url);
let test_reader =
BufReader::with_capacity(1024 * 1024, HttpReader::new(http_url).await.unwrap());
let mut s9pk = crate::s9pk::reader::S9pkReader::from_reader(test_reader, false)
.await
.unwrap();
let manifest = s9pk.manifest().await.unwrap();
assert_eq!(&manifest.id.to_string(), "ghost");
}


@@ -1,7 +1,7 @@
use std::future::Future;
use std::io::Cursor;
use std::os::unix::prelude::MetadataExt;
use std::path::Path;
use std::path::{Path, PathBuf};
use std::sync::atomic::AtomicU64;
use std::task::Poll;
use std::time::Duration;
@@ -10,13 +10,14 @@ use futures::future::{BoxFuture, Fuse};
use futures::{AsyncSeek, FutureExt, TryStreamExt};
use helpers::NonDetachingJoinHandle;
use nix::unistd::{Gid, Uid};
use tokio::fs::File;
use tokio::io::{
duplex, AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, DuplexStream, ReadBuf, WriteHalf,
};
use tokio::net::TcpStream;
use tokio::time::{Instant, Sleep};
use crate::ResultExt;
use crate::prelude::*;
pub trait AsyncReadSeek: AsyncRead + AsyncSeek {}
impl<T: AsyncRead + AsyncSeek> AsyncReadSeek for T {}
@@ -669,3 +670,77 @@ impl<S: AsyncRead + AsyncWrite> AsyncWrite for TimeoutStream<S> {
res
}
}
pub struct TmpFile {}
#[derive(Debug)]
pub struct TmpDir {
path: PathBuf,
}
impl TmpDir {
pub async fn new() -> Result<Self, Error> {
let path = Path::new("/var/tmp/startos").join(base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&rand::random::<[u8; 8]>(),
));
if tokio::fs::metadata(&path).await.is_ok() {
return Err(Error::new(
eyre!("{path:?} already exists"),
ErrorKind::Filesystem,
));
}
tokio::fs::create_dir_all(&path).await?;
Ok(Self { path })
}
pub async fn delete(self) -> Result<(), Error> {
tokio::fs::remove_dir_all(&self.path).await?;
Ok(())
}
}
impl std::ops::Deref for TmpDir {
type Target = Path;
fn deref(&self) -> &Self::Target {
&self.path
}
}
impl AsRef<Path> for TmpDir {
fn as_ref(&self) -> &Path {
&*self
}
}
impl Drop for TmpDir {
fn drop(&mut self) {
if self.path.exists() {
let path = std::mem::take(&mut self.path);
tokio::spawn(async move {
tokio::fs::remove_dir_all(&path).await.unwrap();
});
}
}
}
pub async fn create_file(path: impl AsRef<Path>) -> Result<File, Error> {
let path = path.as_ref();
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent)
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("mkdir -p {parent:?}")))?;
}
File::create(path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("create {path:?}")))
}
pub async fn rename(src: impl AsRef<Path>, dst: impl AsRef<Path>) -> Result<(), Error> {
let src = src.as_ref();
let dst = dst.as_ref();
if let Some(parent) = dst.parent() {
tokio::fs::create_dir_all(parent)
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("mkdir -p {parent:?}")))?;
}
tokio::fs::rename(src, dst)
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("mv {src:?} -> {dst:?}")))
}
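The core of `TmpDir` is the RAII shape: a unique directory is created in `new()` and removed again when the value is dropped. A blocking, std-only sketch of that shape (the real type uses `tokio::fs` and a random base32 name; here the name comes from the current time, purely an assumption for a dependency-free example):

```rust
use std::path::PathBuf;

struct BlockingTmpDir {
    path: PathBuf,
}

impl BlockingTmpDir {
    fn new() -> std::io::Result<Self> {
        // Unique-ish name from the clock; the real code uses random bytes.
        let nanos = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos();
        let path = std::env::temp_dir().join(format!("startos-sketch-{nanos}"));
        std::fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
}

impl Drop for BlockingTmpDir {
    fn drop(&mut self) {
        // Best-effort cleanup, mirroring the Drop impl above.
        let _ = std::fs::remove_dir_all(&self.path);
    }
}
```

The real `Drop` impl has to `tokio::spawn` the removal because `tokio::fs` is async and `drop` is not; the blocking version can just call `std::fs::remove_dir_all` directly.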


@@ -9,11 +9,11 @@ use std::task::{Context, Poll};
use std::time::Duration;
use async_trait::async_trait;
use clap::ArgMatches;
use color_eyre::eyre::{self, eyre};
use fd_lock_rs::FdLock;
use helpers::canonicalize;
pub use helpers::NonDetachingJoinHandle;
use imbl_value::InternedString;
use lazy_static::lazy_static;
pub use models::Version;
use pin_project::pin_project;
@@ -24,14 +24,16 @@ use tracing::instrument;
use crate::shutdown::Shutdown;
use crate::{Error, ErrorKind, ResultExt as _};
pub mod config;
pub mod actor;
pub mod clap;
pub mod cpupower;
pub mod crypto;
pub mod docker;
pub mod future;
pub mod http_reader;
pub mod io;
pub mod logger;
pub mod lshw;
pub mod rpc_client;
pub mod serde;
#[derive(Clone, Copy, Debug, ::serde::Deserialize, ::serde::Serialize)]
@@ -48,8 +50,12 @@ impl std::fmt::Display for Never {
}
}
impl std::error::Error for Never {}
impl<T: ?Sized> AsRef<T> for Never {
fn as_ref(&self) -> &T {
match *self {}
}
}
#[async_trait::async_trait]
pub trait Invoke<'a> {
type Extended<'ext>
where
@@ -60,7 +66,10 @@ pub trait Invoke<'a> {
&'ext mut self,
input: Option<&'ext mut Input>,
) -> Self::Extended<'ext>;
async fn invoke(&mut self, error_kind: crate::ErrorKind) -> Result<Vec<u8>, Error>;
fn invoke(
&mut self,
error_kind: crate::ErrorKind,
) -> impl Future<Output = Result<Vec<u8>, Error>> + Send;
}
pub struct ExtendedCommand<'a> {
@@ -80,7 +89,6 @@ impl<'a> std::ops::DerefMut for ExtendedCommand<'a> {
}
}
#[async_trait::async_trait]
impl<'a> Invoke<'a> for tokio::process::Command {
type Extended<'ext> = ExtendedCommand<'ext>
where
@@ -118,7 +126,6 @@ impl<'a> Invoke<'a> for tokio::process::Command {
}
}
#[async_trait::async_trait]
impl<'a> Invoke<'a> for ExtendedCommand<'a> {
type Extended<'ext> = &'ext mut ExtendedCommand<'ext>
where
@@ -146,7 +153,7 @@ impl<'a> Invoke<'a> for ExtendedCommand<'a> {
}
self.cmd.stdout(Stdio::piped());
self.cmd.stderr(Stdio::piped());
let mut child = self.cmd.spawn()?;
let mut child = self.cmd.spawn().with_kind(error_kind)?;
if let (Some(mut stdin), Some(input)) = (child.stdin.take(), self.input.take()) {
use tokio::io::AsyncWriteExt;
tokio::io::copy(input, &mut stdin).await?;
@@ -275,8 +282,6 @@ impl<W: std::fmt::Write> std::io::Write for FmtWriter<W> {
}
}
pub fn display_none<T>(_: T, _: &ArgMatches) {}
pub struct Container<T>(RwLock<Option<T>>);
impl<T> Container<T> {
pub fn new(value: Option<T>) -> Self {
@@ -490,3 +495,13 @@ impl<'a, T> From<&'a T> for MaybeOwned<'a, T> {
MaybeOwned::Borrowed(value)
}
}
pub fn new_guid() -> InternedString {
use rand::RngCore;
let mut buf = [0; 40];
rand::thread_rng().fill_bytes(&mut buf);
InternedString::intern(base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&buf,
))
}
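`new_guid` encodes 40 random bytes with unpadded RFC 4648 base32 via the `base32` crate. For reference, a std-only sketch of that encoding (not the crate's implementation): each 5-byte group becomes 8 symbols of 5 bits each, and a final partial group emits only the symbols that cover real input bits.

```rust
const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

fn base32_nopad(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(5) {
        // Zero-pad the chunk into a 40-bit big-endian accumulator.
        let mut buf = [0u8; 5];
        buf[..chunk.len()].copy_from_slice(chunk);
        let n = (u64::from(buf[0]) << 32)
            | (u64::from(buf[1]) << 24)
            | (u64::from(buf[2]) << 16)
            | (u64::from(buf[3]) << 8)
            | u64::from(buf[4]);
        // ceil(bits / 5) symbols cover the chunk's real input bytes.
        let symbols = (chunk.len() * 8 + 4) / 5;
        for i in 0..symbols {
            let idx = ((n >> (35 - 5 * i)) & 0x1f) as usize;
            out.push(ALPHABET[idx] as char);
        }
    }
    out
}
```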


@@ -0,0 +1,227 @@
use std::collections::BTreeMap;
use std::path::PathBuf;
use std::sync::atomic::AtomicUsize;
use std::sync::{Arc, Weak};
use futures::future::BoxFuture;
use futures::{FutureExt, TryFutureExt};
use helpers::NonDetachingJoinHandle;
use lazy_async_pool::Pool;
use models::{Error, ErrorKind, ResultExt};
use rpc_toolkit::yajrc::{self, Id, RpcError, RpcMethod, RpcRequest, RpcResponse};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, AsyncRead, AsyncWrite, AsyncWriteExt, BufReader};
use tokio::net::UnixStream;
use tokio::runtime::Handle;
use tokio::sync::{oneshot, Mutex, OnceCell};
use crate::util::io::TmpDir;
type DynWrite = Box<dyn AsyncWrite + Unpin + Send + Sync + 'static>;
type ResponseMap = BTreeMap<Id, oneshot::Sender<Result<Value, RpcError>>>;
const MAX_TRIES: u64 = 3;
pub struct RpcClient {
id: Arc<AtomicUsize>,
handler: NonDetachingJoinHandle<()>,
writer: DynWrite,
responses: Weak<Mutex<ResponseMap>>,
}
impl RpcClient {
pub fn new<
W: AsyncWrite + Unpin + Send + Sync + 'static,
R: AsyncRead + Unpin + Send + Sync + 'static,
>(
writer: W,
reader: R,
id: Arc<AtomicUsize>,
) -> Self {
let writer: DynWrite = Box::new(writer);
let responses = Arc::new(Mutex::new(ResponseMap::new()));
let weak_responses = Arc::downgrade(&responses);
RpcClient {
id,
handler: tokio::spawn(async move {
let mut lines = BufReader::new(reader).lines();
while let Some(line) = lines.next_line().await.transpose() {
match line.map_err(Error::from).and_then(|l| {
serde_json::from_str::<RpcResponse>(&l)
.with_kind(ErrorKind::Deserialization)
}) {
Ok(l) => {
if let Some(id) = l.id {
if let Some(res) = responses.lock().await.remove(&id) {
if let Err(e) = res.send(l.result) {
tracing::warn!(
"RpcClient Response after request aborted: {:?}",
e
);
}
} else {
tracing::warn!(
"RpcClient Response for Unknown ID: {:?}",
l.result
);
}
} else {
tracing::info!("RpcClient Notification: {:?}", l);
}
}
Err(e) => {
tracing::error!("RpcClient Error: {}", e);
tracing::debug!("{:?}", e);
}
}
}
for (_, res) in std::mem::take(&mut *responses.lock().await) {
if let Err(e) = res.send(Err(RpcError {
data: Some("client disconnected before response received".into()),
..yajrc::INTERNAL_ERROR
})) {
tracing::warn!("RpcClient Response after request aborted: {:?}", e);
}
}
})
.into(),
writer,
responses: weak_responses,
}
}
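The reader task above demultiplexes responses by `id`: each in-flight request parks a `oneshot` sender in the `ResponseMap`, and the loop wakes exactly one waiter per response, warning on unknown ids and treating id-less messages as notifications. The routing idea, reduced to a std-only sketch with a hypothetical `id:payload` line format (the real client speaks JSON-RPC):

```rust
use std::collections::BTreeMap;
use std::sync::mpsc;

type Pending = BTreeMap<u64, mpsc::Sender<String>>;

// Deliver one "id:payload" line to whichever requester registered that id.
// Returns true iff a waiter actually received the payload.
fn route(pending: &mut Pending, line: &str) -> bool {
    let Some((id, payload)) = line.split_once(':') else {
        return false; // malformed line: the real loop logs and moves on
    };
    let Ok(id) = id.parse::<u64>() else {
        return false; // no numeric id: treated as a notification upstream
    };
    match pending.remove(&id) {
        // the waiter may have been dropped (request aborted); send then fails
        Some(tx) => tx.send(payload.to_string()).is_ok(),
        None => false, // unknown id: the real loop emits a warning
    }
}
```

Removing the entry on delivery is what guarantees each response wakes at most one request, and is mirrored by the shutdown path that drains the map with a "client disconnected" error.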
pub async fn request<T: RpcMethod>(
&mut self,
method: T,
params: T::Params,
) -> Result<T::Response, RpcError>
where
T: Serialize,
T::Params: Serialize,
T::Response: for<'de> Deserialize<'de>,
{
let id = Id::Number(
self.id
.fetch_add(1, std::sync::atomic::Ordering::SeqCst)
.into(),
);
let request = RpcRequest {
id: Some(id.clone()),
method,
params,
};
if let Some(w) = self.responses.upgrade() {
let (send, recv) = oneshot::channel();
w.lock().await.insert(id.clone(), send);
self.writer
.write_all((serde_json::to_string(&request)? + "\n").as_bytes())
.await
.map_err(|e| {
let mut err = rpc_toolkit::yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(e.to_string()));
err
})?;
match recv.await {
Ok(val) => {
return Ok(serde_json::from_value(val?)?);
}
Err(_err) => {
tokio::task::yield_now().await;
}
}
}
tracing::debug!(
"Client has finished {:?}",
futures::poll!(&mut self.handler)
);
let mut err = rpc_toolkit::yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!("RpcClient thread has terminated"));
Err(err)
}
}
#[derive(Clone)]
pub struct UnixRpcClient {
pool: Pool<
RpcClient,
Box<dyn Fn() -> BoxFuture<'static, Result<RpcClient, std::io::Error>> + Send + Sync>,
BoxFuture<'static, Result<RpcClient, std::io::Error>>,
std::io::Error,
>,
}
impl UnixRpcClient {
pub fn new(path: PathBuf) -> Self {
let tmpdir = Arc::new(OnceCell::new());
let rt = Handle::current();
let id = Arc::new(AtomicUsize::new(0));
Self {
pool: Pool::new(
0,
Box::new(move || {
let mut path = path.clone();
let id = id.clone();
let tmpdir = tmpdir.clone();
NonDetachingJoinHandle::from(rt.spawn(async move {
if path.as_os_str().len() >= 108
// libc::sockaddr_un.sun_path.len()
{
let new_path = tmpdir
.get_or_try_init(|| TmpDir::new())
.await
.map_err(|e| {
std::io::Error::new(std::io::ErrorKind::Other, e.source)
})?
.join("link.sock");
if tokio::fs::metadata(&new_path).await.is_err() {
tokio::fs::symlink(&path, &new_path).await?;
}
path = new_path;
}
let (r, w) = UnixStream::connect(&path).await?.into_split();
Ok(RpcClient::new(w, r, id))
}))
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
.and_then(|x| async move { x })
.boxed()
}),
),
}
}
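The length check in `UnixRpcClient::new` works around the `sockaddr_un.sun_path` limit (a 108-byte buffer on Linux): paths too long to pass to `connect(2)` are reached through a short symlink inside a temp dir instead. A minimal sketch of just that path-selection decision (names are illustrative; the symlink creation itself is elided):

```rust
use std::path::{Path, PathBuf};

// Linux `sockaddr_un.sun_path` is a 108-byte buffer; longer paths cannot
// be handed to bind(2)/connect(2) directly.
const SUN_PATH_MAX: usize = 108;

// Decide which path to connect to: the real socket when it fits, otherwise
// a short symlink (created by the caller) inside a temporary directory.
fn effective_socket_path(socket: &Path, tmpdir: &Path) -> PathBuf {
    if socket.as_os_str().len() >= SUN_PATH_MAX {
        tmpdir.join("link.sock")
    } else {
        socket.to_path_buf()
    }
}
```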
pub async fn request<T: RpcMethod>(
&self,
method: T,
params: T::Params,
) -> Result<T::Response, RpcError>
where
T: Serialize + Clone,
T::Params: Serialize + Clone,
T::Response: for<'de> Deserialize<'de>,
{
let mut tries = 0;
let res = loop {
let mut client = self.pool.clone().get().await?;
if client.handler.is_finished() {
client.destroy();
continue;
}
let res = client.request(method.clone(), params.clone()).await;
match &res {
Err(e) if e.code == rpc_toolkit::yajrc::INTERNAL_ERROR.code => {
let mut e = Error::from(e.clone());
e.kind = ErrorKind::Filesystem;
tracing::error!("{e}");
tracing::debug!("{e:?}");
client.destroy();
}
_ => break res,
}
tries += 1;
if tries > MAX_TRIES {
tracing::warn!("Max Tries exceeded");
break res;
}
};
res
}
}
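`request` retries up to `MAX_TRIES` times, destroying the pooled client on each internal error so the next attempt reconnects. The same retry shape, stripped of the pool and async machinery (a sketch that retries every error for simplicity, where the real loop only retries internal errors):

```rust
// Retry an operation, surfacing the last error once MAX_TRIES is exceeded,
// mirroring the loop in `UnixRpcClient::request`.
const MAX_TRIES: u64 = 3;

fn with_retries<T, E>(mut attempt: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) => {
                tries += 1;
                if tries > MAX_TRIES {
                    // give up and surface the last error, as the client does
                    return Err(e);
                }
                // the real code destroys the pooled client here so the
                // next attempt gets a fresh connection from the pool
            }
        }
    }
}
```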


@@ -1,15 +1,21 @@
use std::any::TypeId;
use std::collections::VecDeque;
use std::marker::PhantomData;
use std::ops::Deref;
use std::process::exit;
use std::str::FromStr;
use clap::builder::ValueParserFactory;
use clap::{ArgMatches, CommandFactory, FromArgMatches};
use color_eyre::eyre::eyre;
use imbl::OrdMap;
use rpc_toolkit::{AnyContext, Handler, HandlerArgs, HandlerArgsFor, HandlerTypes, PrintCliResult};
use serde::de::DeserializeOwned;
use serde::ser::{SerializeMap, SerializeSeq};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use serde_json::Value;
use super::IntoDoubleEndedIterator;
use crate::util::clap::FromStrParser;
use crate::{Error, ResultExt};
pub fn deserialize_from_str<
@@ -266,7 +272,7 @@ impl<'de> serde::de::Deserialize<'de> for ValuePrimative {
}
}
#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq, Eq, PartialOrd, Ord)]
#[serde(rename_all = "kebab-case")]
pub enum IoFormat {
Json,
@@ -425,36 +431,207 @@ impl IoFormat {
}
}
pub fn display_serializable<T: Serialize>(format: IoFormat, result: T) {
format
.to_writer(std::io::stdout(), &result)
.expect("Error serializing result to stdout");
if format == IoFormat::JsonPretty {
println!()
}
}
#[derive(Deserialize, Serialize)]
pub struct WithIoFormat<T> {
pub format: Option<IoFormat>,
#[serde(flatten)]
pub rest: T,
}
impl<T: FromArgMatches> FromArgMatches for WithIoFormat<T> {
fn from_arg_matches(matches: &ArgMatches) -> Result<Self, clap::Error> {
Ok(Self {
rest: T::from_arg_matches(matches)?,
format: matches.get_one("format").copied(),
})
}
fn update_from_arg_matches(&mut self, matches: &ArgMatches) -> Result<(), clap::Error> {
self.rest.update_from_arg_matches(matches)?;
self.format = matches.get_one("format").copied();
Ok(())
}
}
impl<T: CommandFactory> CommandFactory for WithIoFormat<T> {
fn command() -> clap::Command {
let cmd = T::command();
if !cmd.get_arguments().any(|a| a.get_id() == "format") {
cmd.arg(
clap::Arg::new("format")
.long("format")
.value_parser(|s: &str| s.parse::<IoFormat>().map_err(|e| eyre!("{e}"))),
)
} else {
cmd
}
}
fn command_for_update() -> clap::Command {
let cmd = T::command_for_update();
if !cmd.get_arguments().any(|a| a.get_id() == "format") {
cmd.arg(
clap::Arg::new("format")
.long("format")
.value_parser(|s: &str| s.parse::<IoFormat>().map_err(|e| eyre!("{e}"))),
)
} else {
cmd
}
}
}
pub trait HandlerExtSerde: Handler {
fn with_display_serializable(self) -> DisplaySerializable<Self>;
}
impl<T: Handler> HandlerExtSerde for T {
fn with_display_serializable(self) -> DisplaySerializable<Self> {
DisplaySerializable(self)
}
}
#[derive(Debug, Clone)]
pub struct DisplaySerializable<T>(pub T);
impl<T: HandlerTypes> HandlerTypes for DisplaySerializable<T> {
type Params = WithIoFormat<T::Params>;
type InheritedParams = T::InheritedParams;
type Ok = T::Ok;
type Err = T::Err;
}
#[async_trait::async_trait]
impl<T: Handler> Handler for DisplaySerializable<T> {
type Context = T::Context;
fn handle_sync(
&self,
HandlerArgs {
context,
parent_method,
method,
params,
inherited_params,
raw_params,
}: HandlerArgsFor<Self::Context, Self>,
) -> Result<Self::Ok, Self::Err> {
self.0.handle_sync(HandlerArgs {
context,
parent_method,
method,
params: params.rest,
inherited_params,
raw_params,
})
}
async fn handle_async(
&self,
HandlerArgs {
context,
parent_method,
method,
params,
inherited_params,
raw_params,
}: HandlerArgsFor<Self::Context, Self>,
) -> Result<Self::Ok, Self::Err> {
self.0
.handle_async(HandlerArgs {
context,
parent_method,
method,
params: params.rest,
inherited_params,
raw_params,
})
.await
}
fn contexts(&self) -> Option<imbl::OrdSet<std::any::TypeId>> {
self.0.contexts()
}
fn metadata(
&self,
method: VecDeque<&'static str>,
ctx_ty: TypeId,
) -> OrdMap<&'static str, imbl_value::Value> {
self.0.metadata(method, ctx_ty)
}
fn method_from_dots(&self, method: &str, ctx_ty: TypeId) -> Option<VecDeque<&'static str>> {
self.0.method_from_dots(method, ctx_ty)
}
}
impl<T: HandlerTypes> PrintCliResult for DisplaySerializable<T>
where
T::Ok: Serialize,
{
type Context = AnyContext;
fn print(
&self,
HandlerArgs { params, .. }: HandlerArgsFor<Self::Context, Self>,
result: Self::Ok,
) -> Result<(), Self::Err> {
display_serializable(params.format.unwrap_or_default(), result);
Ok(())
}
}
#[derive(Deserialize, Serialize)]
pub struct StdinDeserializable<T>(pub T);
impl<T> FromArgMatches for StdinDeserializable<T>
where
T: DeserializeOwned,
{
fn from_arg_matches(matches: &ArgMatches) -> Result<Self, clap::Error> {
let format = matches
.get_one::<IoFormat>("format")
.copied()
.unwrap_or_default();
Ok(Self(format.from_reader(&mut std::io::stdin()).map_err(
|e| clap::Error::raw(clap::error::ErrorKind::ValueValidation, e),
)?))
}
fn update_from_arg_matches(&mut self, matches: &ArgMatches) -> Result<(), clap::Error> {
let format = matches
.get_one::<IoFormat>("format")
.copied()
.unwrap_or_default();
self.0 = format
.from_reader(&mut std::io::stdin())
.map_err(|e| clap::Error::raw(clap::error::ErrorKind::ValueValidation, e))?;
Ok(())
}
}
impl<T> clap::Args for StdinDeserializable<T>
where
T: DeserializeOwned,
{
fn augment_args(cmd: clap::Command) -> clap::Command {
if !cmd.get_arguments().any(|a| a.get_id() == "format") {
cmd.arg(
clap::Arg::new("format")
.long("format")
.value_parser(|s: &str| s.parse::<IoFormat>().map_err(|e| eyre!("{e}"))),
)
} else {
cmd
}
}
fn augment_args_for_update(cmd: clap::Command) -> clap::Command {
if !cmd.get_arguments().any(|a| a.get_id() == "format") {
cmd.arg(
clap::Arg::new("format")
.long("format")
.value_parser(|s: &str| s.parse::<IoFormat>().map_err(|e| eyre!("{e}"))),
)
} else {
cmd
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Duration(std::time::Duration);
impl Deref for Duration {
type Target = std::time::Duration;
@@ -518,6 +695,12 @@ impl std::str::FromStr for Duration {
}))
}
}
impl ValueParserFactory for Duration {
type Parser = FromStrParser<Self>;
fn value_parser() -> Self::Parser {
FromStrParser::new()
}
}
impl std::fmt::Display for Duration {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let nanos = self.as_nanos();
@@ -843,3 +1026,67 @@ impl Serialize for Regex {
serialize_display(&self.0, serializer)
}
}
// TODO: make this not allocate
#[derive(Debug)]
pub struct NoOutput;
impl<'de> Deserialize<'de> for NoOutput {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let _ = Value::deserialize(deserializer);
Ok(NoOutput)
}
}
pub fn apply_expr(input: jaq_core::Val, expr: &str) -> Result<jaq_core::Val, Error> {
let (expr, errs) = jaq_core::parse::parse(expr, jaq_core::parse::main());
let Some(expr) = expr else {
return Err(Error::new(
eyre!("Failed to parse expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let mut errs = Vec::new();
let mut defs = jaq_core::Definitions::core();
for def in jaq_std::std() {
defs.insert(def, &mut errs);
}
let filter = defs.finish(expr, Vec::new(), &mut errs);
if !errs.is_empty() {
return Err(Error::new(
eyre!("Failed to compile expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let inputs = jaq_core::RcIter::new(std::iter::empty());
let mut res_iter = filter.run(jaq_core::Ctx::new([], &inputs), input);
let Some(res) = res_iter
.next()
.transpose()
.map_err(|e| eyre!("{e}"))
.with_kind(crate::ErrorKind::Deserialization)?
else {
return Err(Error::new(
eyre!("expr returned no results"),
crate::ErrorKind::InvalidRequest,
));
};
if res_iter.next().is_some() {
return Err(Error::new(
eyre!("expr returned too many results"),
crate::ErrorKind::InvalidRequest,
));
}
Ok(res)
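`apply_expr` insists the compiled jq filter yields exactly one value, erroring on zero or on more than one. That tail check is the same shape as this generic helper (an illustrative distillation, not shared code):

```rust
// Enforce that an expression produced exactly one value, as `apply_expr`
// does with the jaq result iterator.
fn exactly_one<T>(mut results: impl Iterator<Item = T>) -> Result<T, &'static str> {
    let first = results.next().ok_or("expr returned no results")?;
    if results.next().is_some() {
        return Err("expr returned too many results");
    }
    Ok(first)
}
```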
}