Features

Furiko specializes in time-based scheduling of recurring and ad-hoc jobs. The following is a non-exhaustive list of features and enhancements offered by Furiko.

Feature List

Timezone-aware Cron Scheduling

Furiko allows users to specify a cron schedule to automatically schedule jobs, with up to per-second granularity.

Furiko also offers native support for cron schedules with timezones. This allows users to specify their cron schedule in a timezone that is familiar to them, or one that is standardized across their application. If no timezone is specified, Furiko falls back to a configurable cluster-wide default timezone for all cron schedules.
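As an illustration, a JobConfig with a timezone-aware cron schedule might look like the following sketch. The field names follow the execution.furiko.io/v1alpha1 API as typically shown in Furiko examples; verify the exact schema against your installed version.

```yaml
apiVersion: execution.furiko.io/v1alpha1
kind: JobConfig
metadata:
  name: send-weekly-report
spec:
  schedule:
    cron:
      # Runs at 17:00 on weekdays, interpreted in the given timezone
      # rather than the timezone of the controller.
      expression: "0 17 * * 1-5"
      timezone: "Asia/Singapore"
    disabled: false
  template:
    spec:
      task:
        template:
          pod:
            spec:
              containers:
                - name: send-weekly-report
                  image: alpine
                  args: ["echo", "Sending weekly report..."]
```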

Cluster-wide Load Balancing

Furiko offers a unique, extended cron syntax that can drastically improve performance when running distributed cron at scale. For example, specifying H/5 * * * * means to run every 5 minutes, on a "random" (but consistent) minute/second within the 5-minute range.

This avoids thundering-herd effects when thousands of jobs would otherwise start at exactly the same time: executions are spread out evenly, which reduces queueing delays and the load on downstream dependencies.
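As a sketch (field names assumed from the v1alpha1 API), the extended syntax is written directly in the cron expression of the JobConfig:

```yaml
spec:
  schedule:
    cron:
      # H is resolved to a stable per-JobConfig offset within the range,
      # so this JobConfig still runs every 5 minutes, but on a different
      # minute than other JobConfigs using the same expression.
      expression: "H/5 * * * *"
```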

Strong Concurrency Handling

Furiko introduces stronger handling of multiple concurrent jobs. Using the Forbid concurrency policy, Furiko takes a strict approach to ensure that multiple jobs are never started at the same time, free of race conditions.

In addition, ad-hoc jobs are subject to the same concurrency policy as automatically scheduled ones, which helps prevent untimely incidents.
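For example, the concurrency policy is set on the JobConfig; the snippet below is a minimal sketch assuming the v1alpha1 field layout:

```yaml
spec:
  concurrency:
    # Never start a new Job for this JobConfig while another one
    # is still running, whether scheduled or started ad-hoc.
    policy: Forbid
```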

Scheduling Future Executions

Furiko allows ad-hoc jobs to be queued for execution at a later time, or to start a new job execution immediately once the current one finishes.
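A hypothetical ad-hoc Job that is enqueued to start at a later time might look like this. Treat spec.startPolicy and its fields as assumptions based on the v1alpha1 API; the name and timestamp are placeholders:

```yaml
apiVersion: execution.furiko.io/v1alpha1
kind: Job
metadata:
  name: send-weekly-report-adhoc
spec:
  configName: send-weekly-report
  startPolicy:
    # Wait for any currently running Job of this JobConfig to finish.
    concurrencyPolicy: Enqueue
    # Do not start before this time.
    startAfter: "2024-01-01T09:00:00+08:00"
```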

Preventing Missed and Double Executions

Furiko provides strong guarantees to schedule jobs even in the face of failure, and is able to prevent both double runs and missed runs in many cases.

Furiko prevents double scheduled runs by using deterministic name formats. It also prevents missed runs using back-scheduling to tolerate short downtime, allowing the cluster administrator to safely restart or upgrade Furiko at any time.

Enhanced Timeout Handling

Furiko offers additional timeouts that apply during a job execution. For example, pending timeouts gracefully handle node outages instead of hanging on a single execution, ensuring forward progress for automatically scheduled job configs.
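As a sketch, a pending timeout can be combined with retries so that a task stuck on a dead node is killed and retried elsewhere. The field names below (taskPendingTimeoutSeconds, maxAttempts) are assumed from typical v1alpha1 examples:

```yaml
spec:
  template:
    spec:
      # Kill a task that stays Pending longer than 30 minutes,
      # e.g. because its node went down, instead of waiting forever.
      taskPendingTimeoutSeconds: 1800
      # Retry up to 3 times, each attempt in a fresh Pod.
      maxAttempts: 3
```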

Job Options

Furiko allows you to use JobConfigs as templates for Jobs by passing parameters into the Job. The JobConfig can be parameterized with Job Options, which define structured inputs that are validated and substituted at runtime.
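For example, a JobConfig could declare a String option and substitute it into the task's arguments. This sketch follows the v1alpha1 layout commonly shown in Furiko examples (option types, the string sub-fields and the ${option.username} substitution syntax should be verified against your version):

```yaml
spec:
  option:
    options:
      - type: String
        name: username
        string:
          default: Example User
          trimSpaces: true
  template:
    spec:
      task:
        template:
          pod:
            spec:
              containers:
                - name: report
                  image: alpine
                  args:
                    - echo
                    # Replaced at runtime with the validated option value.
                    - Sending report for ${option.username}...
```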

Running Multiple Parallel Tasks per Job

Furiko supports running multiple tasks in parallel within a Job, with index number-based, index key-based or matrix-based expansion of the tasks to run. A completion strategy can also be defined to automatically terminate all parallel tasks once certain conditions are met.
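A key-based expansion might be sketched as follows, running one task per key in parallel. The parallelism fields (withKeys, completionStrategy) and the AllSuccessful value are assumptions based on the v1alpha1 API:

```yaml
spec:
  template:
    spec:
      parallelism:
        # Creates one parallel task per key; each task can read its
        # own key to decide what to work on.
        withKeys: ["us-east-1", "eu-west-1", "ap-southeast-1"]
        # Terminate remaining tasks once all of them have succeeded.
        completionStrategy: AllSuccessful
```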

Federation Across Multiple Clusters

Furiko also provides add-on support for safely federating, cordoning and draining JobConfigs across multiple Kubernetes clusters.

Info: This feature is currently planned in the Roadmap.

Monitoring, Notifications and Telemetry

Furiko also provides add-on support for monitoring and notifications of job executions and failures via a variety of methods, including Prometheus metrics, Slack webhooks and email notifications. In addition, Furiko provides alternative API servers to query, store and analyze large numbers of job executions per day, reducing the load on kube-apiserver.

Info: This feature is currently planned in the Roadmap.

Comparison with batch/v1

The following is a short comparison between Furiko and batch/v1 Jobs and CronJobs:

Feature                                  | Furiko    | batch/v1
-----------------------------------------|-----------|--------------------------------
Scheduling                               |           |
  Cron expressions                       | ✓         | ✓
  Cron timezone                          | ✓         | (not officially supported)
  Scheduling constraints                 | ✓         | ✗
  Cron load balancing                    | ✓         | ✗
  Forbid concurrent with ad-hoc execution| ✓         | ✗
  Back-scheduling                        | ✓         | (via startingDeadlineSeconds)
  Enqueue jobs for later                 | ✓         | ✗
Task Execution                           |           |
  Retries using separate Pods            | ✓         | ✓
  Pending timeouts for dead nodes        | ✓         | ✗
  Multiple parallel Pods per Job         | ✓         | ✓
  Parallel expansion by list and matrix  | ✓         | ✗
  Custom task executors                  | (planned) | ✗
General                                  |           |
  Automatic cleanup with TTL             | ✓         | ✓
  Parameterization of job inputs         | ✓         | ✗