◆ OFFSET
◆ FLAGS
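Neither OFFSET nor FLAGS has its value preserved on this page. As a hedged sketch, the definitions below follow the option-table pattern used throughout libavfilter (an offsetof() into the filter's private context plus a set of AV_OPT_FLAG_* bits); the exact flags and field names used by vf_colorlevels.c are assumptions here.

    /* Sketch (assumption): conventional libavfilter option-table macros.
     * The real definitions are in vf_colorlevels.c and may differ. */
    #define OFFSET(x) offsetof(ColorLevelsContext, x)
    #define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)

    /* Used when declaring each AVOption in colorlevels_options, e.g.
     * (illustrative entry, hypothetical field name):
     * { "rimin", "set input red black point", OFFSET(in_min_r),
     *   AV_OPT_TYPE_DOUBLE, { .dbl = 0 }, -1, 1, FLAGS },
     */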
◆ LOAD_COMMON
Value:
\
    const int slice_start = (process_h *  jobnr   ) / nb_jobs;\
    const int slice_end   = (process_h * (jobnr+1)) / nb_jobs;\
    int x, y;\
    const int step = s->step;\
\
Definition at line 123 of file vf_colorlevels.c.
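LOAD_COMMON derives, from the job index, the range of rows a worker thread should process. The standalone sketch below (plain C, not FFmpeg code; process_h, jobnr and nb_jobs mirror the names in the macro) demonstrates that the (h * jobnr) / nb_jobs arithmetic yields contiguous, non-overlapping slices that cover every row exactly once.

    #include <assert.h>
    #include <stdio.h>

    /* Mirrors the slice bounds computed by LOAD_COMMON. */
    static void slice_bounds(int process_h, int jobnr, int nb_jobs,
                             int *slice_start, int *slice_end)
    {
        *slice_start = (process_h *  jobnr     ) / nb_jobs;
        *slice_end   = (process_h * (jobnr + 1)) / nb_jobs;
    }

    int main(void)
    {
        const int process_h = 1080;   /* frame height */
        const int nb_jobs   = 8;      /* number of worker threads */
        int covered = 0;

        for (int jobnr = 0; jobnr < nb_jobs; jobnr++) {
            int start, end;
            slice_bounds(process_h, jobnr, nb_jobs, &start, &end);
            printf("job %d: rows [%d, %d)\n", jobnr, start, end);
            assert(start == covered);  /* contiguous with the previous slice */
            covered = end;
        }
        assert(covered == process_h);  /* all rows covered exactly once */
        return 0;
    }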
◆ AVFILTER_DEFINE_CLASS()
AVFILTER_DEFINE_CLASS ( colorlevels )
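AVFILTER_DEFINE_CLASS(colorlevels) generates the colorlevels_class AVClass that ff_vf_colorlevels later exposes through .priv_class, binding the filter to colorlevels_options. The expansion below is a sketch of the form this macro takes in libavfilter's internal headers; the exact field list can vary between FFmpeg versions.

    /* Approximate expansion (assumption): */
    static const AVClass colorlevels_class = {
        .class_name = "colorlevels",
        .item_name  = av_default_item_name,
        .option     = colorlevels_options,
        .version    = LIBAVUTIL_VERSION_INT,
        .category   = AV_CLASS_CATEGORY_FILTER,
    };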
◆ query_formats()
◆ config_input()
◆ colorlevel_slice_8()
◆ colorlevel_slice_16()
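colorlevel_slice_8() and colorlevel_slice_16() are the per-slice workers for 8-bit and 16-bit pixel formats, invoked once per job over the row range set up by LOAD_COMMON. The helper below is a minimal, self-contained sketch of the levels mapping such a worker applies to each sample (clamp to the input range, then rescale to the output range); it is illustrative and does not reproduce the actual loop in vf_colorlevels.c.

    #include <stdint.h>

    /* Illustrative 8-bit levels mapping: remap [imin, imax] onto [omin, omax].
     * Assumes imax > imin. Not the actual FFmpeg implementation. */
    static uint8_t map_level_8(int in, int imin, int imax, int omin, int omax)
    {
        const double coeff = (double)(omax - omin) / (imax - imin);
        int v = in;

        if (v < imin) v = imin;          /* clamp to the input range */
        if (v > imax) v = imax;

        v = (int)((v - imin) * coeff + omin + 0.5);
        if (v < 0)   v = 0;              /* keep within the 8-bit sample range */
        if (v > 255) v = 255;
        return (uint8_t)v;
    }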
◆ filter_frame()
◆ colorlevels_options
◆ colorlevels_inputs
◆ colorlevels_outputs
◆ ff_vf_colorlevels
Initial value:
= {
    .name       = "colorlevels",
    .priv_class = &colorlevels_class,
}
Definition at line 319 of file vf_colorlevels.c.
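Only the first two initializer fields of ff_vf_colorlevels survive on this page; the struct is what registers the filter under the name "colorlevels". As a usage note, an application can look the filter up by that name and instantiate it with an option string, as in the hedged sketch below (the avfilter_* calls are the public libavfilter API; the rimin/gimin/bimin values are illustrative).

    #include "libavfilter/avfilter.h"
    #include "libavutil/error.h"

    /* Minimal sketch: instantiate the filter registered by ff_vf_colorlevels. */
    static int create_colorlevels(AVFilterGraph *graph, AVFilterContext **ctx)
    {
        const AVFilter *f = avfilter_get_by_name("colorlevels");
        if (!f)
            return AVERROR_FILTER_NOT_FOUND;

        /* rimin/gimin/bimin are colorlevels options; the values are examples. */
        return avfilter_graph_create_filter(ctx, f, "levels",
                                            "rimin=0.058:gimin=0.058:bimin=0.058",
                                            NULL, graph);
    }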