Transports

LogPot uses Transports to manage the delivery of log entries to various sinks (console, files, HTTP endpoints, etc.). Transports derive from a common abstract base class and share a consistent set of behaviors and configuration options.


Base Class

All transports extend the abstract Transport<Levels, Options>:

```ts
abstract class Transport<
  Levels extends Record<string, number>,
  Options extends TransportOptions<Levels>
> {
  protected options: Options;
  protected formatter: Formatter<Levels>;
  protected levelDefinition: LevelDefinition<Levels>;
  private isClosing: boolean;
  private isClosed: boolean;
  private queue?: AsyncJobQueue;
  private worker?: Worker;

  constructor(levelDefinition: LevelDefinition<Levels>, options?: Options) { … }

  /** Public API: enqueue or process a single log record. */
  log(log: Log<Levels>): void { … }

  /** Flush pending logs and wait for completion. */
  flushAndWait(isClosing?: boolean): Promise<boolean | undefined> { … }

  /** Gracefully close: flush, terminate worker, release resources. */
  close(): Promise<void> { … }

  // Must be implemented by subclasses:
  protected abstract doLog(log: Log<Levels>): void;
  protected abstract flush(): void;
  protected abstract doFlushAndWait(): Promise<void>;
  protected abstract doClose(): Promise<void>;
  protected abstract onRunAsWorker(): void;

  // Worker-thread support, error handling, stats, etc.
}
```
Common Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | `string` | class name | Human-readable transport name. |
| `runAsWorker` | `boolean` | `false` | Offload processing into a `worker_threads` Worker. |
| `logLevel` | `number` | `0` (TRACE) | Minimum numeric severity to accept; lower entries are dropped. |
| `levels` | `LevelName<Levels>[]` | all defined levels | Whitelist of level names to include. |
| `categories` | `Set<string>` | no filtering | Only logs whose `meta.category` matches are emitted. |
| `filter` | `(log) => boolean` | none | Custom predicate to drop or accept logs before formatting. |
| `encoding` | `BufferEncoding` | `'utf8'` | Character encoding for raw writes. |
| `formatter` | `FormatterOptions<Levels>` | printer default | How to format batches into strings or Buffers. |
| `transformer` | `(log) => Log \| null` | none | Map or drop logs right before transport. |
| `onError` | `(TransportError<Levels>) => void` | none | Callback when delivery or flush fails. |
| `context` | `Record<string, unknown>` | `{}` | Static metadata merged into each log entry. |

Worker settings:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `worker?.url` | `URL` | internal script URL | Override URL for the worker script. |
| `worker?.custom` | `() => Worker` | none | Factory function to instantiate a worker. |
| `worker?.readyTimeout` | `number` (ms) | `30000` | Timeout waiting for the worker "ready" handshake. |
| `worker?.closeTimeout` | `number` (ms) | `60000` | Timeout waiting for the worker "closed" handshake. |
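To illustrate, the common options above could be combined like this. The field values (category names, level number, the `meta` field accessed by the filter) are hypothetical and shown only to demonstrate the shapes from the table:

```typescript
import type { TransportOptions } from 'logpot'

// Hypothetical combination of common transport options (values illustrative).
const common: Partial<TransportOptions<Record<string, number>>> = {
  name: 'audit',
  logLevel: 30, // accept entries at severity 30 and above (assumed numbering)
  categories: new Set(['http', 'db']), // only these categories are emitted
  filter: (log) => log.meta?.category !== 'healthcheck', // drop noisy entries
  context: { service: 'checkout', region: 'eu-west-1' }, // merged into each log
}
```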

Error Handling

When anything goes wrong (an I/O failure, HTTP error, serialization problem, etc.), transports wrap the details in a TransportError:

```ts
interface TransportError<Levels> {
  err: SerializedError   // fully serialized Error or thrown value
  data?: unknown         // raw payload sent or written
  log?: Log<Levels>      // the single log entry that triggered the failure
  batch?: Log<Levels>[]  // batch that was being processed
  attempt?: number       // current retry attempt
  retryCount?: number    // maximum configured retries
  transport?: string     // transport name
}
```
  • Serialization uses the ErrorSerialization options (stack, cause, aggregated, depth).
  • The onError callback is invoked with this object for custom error handling or alerting.

ConsoleTransport

Writes logs to stdout (or stderr) via console.log() or process.stdout.write().

```ts
class ConsoleTransport<Levels> extends Transport<Levels, ConsoleTransportOptions<Levels>> {
  constructor(levelDef, options?) { … }
  protected doLog(log: Log<Levels>): void { … }
  protected flush(): void { /* no buffering by default */ }
  protected async doFlushAndWait(): Promise<void> { /* immediate */ }
  protected async doClose(): Promise<void> { /* nothing to close */ }
  /** Renders a single record or batch via its `Formatter`. */
  protected format(log: Log<Levels>): string | Buffer { … }
}
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `useJobQueue` | `boolean` | `false` | Enqueue writes via `AsyncJobQueue`. |
| `concurrency` | `number` | `20` | Max parallel writes when `useJobQueue` is `true`. |
  1. Log → checks logLevel / filters → format() → writes to stdout.
  2. Flush / Close are no‑ops (or immediate).
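A minimal construction sketch using the two options above (STD_LEVEL_DEF is the level definition shown later on this page; the specific values are illustrative):

```typescript
import { ConsoleTransport, STD_LEVEL_DEF } from 'logpot'

// Serialize console writes through an AsyncJobQueue with bounded parallelism.
const consoleTransport = new ConsoleTransport(STD_LEVEL_DEF, {
  useJobQueue: true, // route writes through AsyncJobQueue
  concurrency: 5,    // at most 5 writes in flight
})
```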

FileTransport

Appends logs to a filesystem path, with batching, rotation, retention, and retry.

```ts
class FileTransport<Levels> extends Transport<Levels, FileTransportOptions<Levels>> {
  constructor(levelDef, opts) { … }
  protected doLog(log: Log<Levels>): void { /* queue & batch */ }
  protected startFlushTimer(): void { … }
  protected stopFlushTimer(): void { … }
  protected flush(): void { /* write buffered batch */ }
  protected async doFlushAndWait(): Promise<void> { /* wait until flush completes */ }
  protected async doClose(): Promise<void> { /* flush & close streams */ }
  protected onRunAsWorker(): void { /* stop timer */ }
  protected format(batch: Log<Levels>[]): string { /* JSON or printer output */ }
}
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `filename` | `string` | required | Path to the log file. |
| `flags` | `string` | `'a'` | File open flags (`'a'` append, `'w'` write). |
| `mode` | `number` | `0o644` | File permission bits. |
| `concurrency` | `number` | `20` | Max parallel I/O tasks. |
| `batchSize` | `number` | `100` | Entries buffered per write. |
| `flushInterval` | `number` (ms) | `5000` | Auto-flush timer interval. |
| `rotate` | `RotationOptions` | none | `{ interval, maxSize, maxFiles, compress }` |
| `retry` | `RetryOption` | `{…}` | Retries on write failures: `maxRetry`, `baseDelay`, `maxDelay`. |
Rotation behavior:
  • Interval: daily/hourly rollovers.
  • Max size: rotate when the file exceeds the threshold.
  • Compression: optionally gzip old files.
  • Max files: trim the oldest.
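For instance, a file transport with rotation and retry might be configured as below. The concrete values, including the `'daily'` interval string, are assumptions about RotationOptions made for illustration:

```typescript
import { FileTransport, STD_LEVEL_DEF } from 'logpot'

// Hypothetical file transport: larger batches, daily + size-based rotation.
const fileTransport = new FileTransport(STD_LEVEL_DEF, {
  filename: 'logs/app.log',
  batchSize: 500,          // buffer more entries per write
  flushInterval: 10_000,   // flush at least every 10 s
  rotate: {
    interval: 'daily',     // assumed value; see RotationOptions
    maxSize: 50 * 1024 * 1024, // also rotate past 50 MiB
    maxFiles: 14,          // keep two weeks of files
    compress: true,        // gzip rotated files
  },
  retry: { maxRetry: 3, baseDelay: 200 },
})
```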

HttpTransport

Batches logs and sends them via HTTP (fetch or http.request) to remote services.

```ts
class HttpTransport<Levels> extends Transport<Levels, HttpTransportOptions<Levels>> {
  constructor(levelDef, opts) { … }
  protected doLog(log: Log<Levels>): void { buffer.push(log) }
  protected startFlushTimer(): void { … }
  protected stopFlushTimer(): void { … }
  protected flush(): void { this.sendBatch(buffered) }
  protected async doFlushAndWait(): Promise<void> { /* await in-flight requests */ }
  protected async doClose(): Promise<void> { /* flush & wait */ }
  protected async sendBatch(batch: Log<Levels>[]): Promise<void> { /* HTTP call */ }
  protected onRunAsWorker(): void { /* stop timer */ }
  protected format(batch: Log<Levels>[]): string | Buffer { /* JSON or NDJSON */ }
}
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `url` | `string` | required | Endpoint URL. |
| `method` | `'POST' \| 'PUT'` | `'POST'` | HTTP verb. |
| `headers` | `Record<string, string>` | `{ 'Content-Type': 'application/json' }` | Custom headers. |
| `batchSize` | `number` | `100` | Logs per HTTP payload. |
| `flushInterval` | `number` (ms) | `5000` | Timer to auto-send partial batches. |
| `concurrency` | `number` | `10` | Parallel HTTP requests. |
| `retry` | `RetryOption` | `{…}` | Retry/backoff on network or 5xx errors. |
| `auth` | `HttpAuth` | `{ type: 'none' }` | HTTP authentication: basic, bearer, apiKey, oauth2, or none. |
```ts
type HttpAuth =
  | { type: 'none' }
  | { type: 'basic'; username: string; password: string }
  | { type: 'bearer'; token: string }
  | { type: 'apiKey'; in: 'header' | 'query'; name: string; value: string }
  | {
      type: 'oauth2'
      tokenUrl: string
      clientId: string
      clientSecret: string
      scope?: string
      retry?: RetryOption
    }
```
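Putting the options table and the HttpAuth union together, here is a sketch of an API-key-authenticated HTTP transport. The endpoint URL, header name, and environment variable are hypothetical:

```typescript
import { HttpTransport, STD_LEVEL_DEF } from 'logpot'

// Hypothetical HTTP transport sending batches to a log-ingestion endpoint.
const httpTransport = new HttpTransport(STD_LEVEL_DEF, {
  url: 'https://logs.example.com/ingest', // illustrative endpoint
  batchSize: 200,        // logs per payload
  flushInterval: 2_000,  // send partial batches every 2 s
  auth: {
    type: 'apiKey',
    in: 'header',
    name: 'X-API-Key',                       // assumed header name
    value: process.env.LOG_API_KEY ?? '',    // assumed env variable
  },
  retry: { maxRetry: 5, baseDelay: 250, maxDelay: 5_000 },
})
```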

Custom Transports

You can extend Transport<Levels, YourOptions> to ship logs anywhere: databases, message queues, third-party SDKs, and so on.

```ts
import {
  Transport,
  DEFAULT_LEVELS,
  RetryOption,
  TransportOptions,
} from 'logpot'

interface MyCustomTransportOptions<
  Levels extends Record<string, number> = DEFAULT_LEVELS
> extends TransportOptions<Levels> {
  /** Your custom settings… */
  connectionString: string
  retry?: RetryOption
}
```
```ts
import {
  DEFAULT_LEVELS,
  LevelDefinition,
  Log,
  Transport,
  TransportError,
} from 'logpot'

export class MyCustomTransport<
  Levels extends Record<string, number> = DEFAULT_LEVELS
> extends Transport<Levels, MyCustomTransportOptions<Levels>> {
  constructor(
    levelDef: LevelDefinition<Levels>,
    options: MyCustomTransportOptions<Levels>
  ) {
    super(levelDef, options)
    this.transportName = 'myCustomTransport'
    if (!this.options.name) this.options.name = this.transportName
    // e.g. initialize a DB client or SDK
    this.connect()
  }

  protected doLog(log: Log<Levels>): void {
    // Called for each log entry.
    // You can batch or send immediately:
    this.sendToRemote(log).catch((err) => this.handleError({ err, log }))
  }

  protected flush(): void {
    // Initiate a flush and return.
  }

  protected async doFlushAndWait(): Promise<void> {
    this.flush()
    // Wait until the flush has completed.
  }

  protected async doClose(): Promise<void> {
    // Clean up resources (close connections).
    await this.disconnect()
  }

  private async sendToRemote(log: Log<Levels>): Promise<void> {
    // Transform & serialize via `this.formatter.format([log])`
    const payload = this.formatter.format([log])
    // Send via your client…
  }
}
```

To run your transport in a worker thread, register a worker entry point for it:

```ts
import { LevelDefinition, Transport } from 'logpot'
import {
  MyCustomTransport,
  MyCustomTransportOptions,
} from './myCustomTransport'

Transport.initWorker(
  (options: MyCustomTransportOptions, levelDefinition: LevelDefinition) => {
    const transport = new MyCustomTransport(levelDefinition, options)
    return transport
  }
)

// When a user enables `runAsWorker: true`, LogPot spawns a Worker
// and invokes your `initWorker` callback to configure the worker-side logic.
```

When creating a logger:

```ts
import { createLogger, STD_LEVEL_DEF } from 'logpot'
import { MyCustomTransport } from './myCustomTransport'

await createLogger({
  transport: new MyCustomTransport(STD_LEVEL_DEF, {
    name: 'MyDB',
    logLevel: STD_LEVEL_DEF.getLevelNumber('INFO'),
    connectionString: 'postgres://…',
    retry: { maxRetry: 3, baseDelay: 200 },
  }),
})
```
  • Mix & match: you can supply an array of transports.
  • Global logger: use setLogger(...) to override the global logger, or getLogger() to retrieve it.
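For example, supplying more than one transport at once might look like this (the filename is illustrative):

```typescript
import {
  ConsoleTransport,
  FileTransport,
  STD_LEVEL_DEF,
  createLogger,
} from 'logpot'

// Route every log to both the console and a file.
await createLogger({
  transport: [
    new ConsoleTransport(STD_LEVEL_DEF),
    new FileTransport(STD_LEVEL_DEF, { filename: 'logs/app.log' }),
  ],
})
```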

Best Practices

  • Batching: buffer logs for high-throughput sinks; flush on an interval or at a size threshold.
  • Backpressure: honor concurrency and retry options to avoid overload.
  • Error handling: provide an onError callback or subclass handleError to surface failures.
  • Worker threads: offload heavy I/O or CPU-bound formatting to avoid blocking your main application.
  • Context & filtering: use context, filter, and transformer to enrich, drop, or redact logs early.

Transports in LogPot provide a flexible, consistent API for routing logs, whether to the console, files, HTTP services, or entirely custom systems. By extending the abstract base class, you benefit from built-in formatting, error serialization, retry/backoff, and worker-thread support, and can focus on the delivery logic unique to your target sink.