Map-reduce jobs are supported by the WMArchive service.
Please note that there is no web UI for MR jobs yet.
Users need to write two functions, a mapper and a reducer, for their task; see the example below. To run MR jobs you need an account on the analytix.cern.ch node; log in there, set up the WMArchive environment, and run the mrjob script:
mrjob --hdir=hdfs://host:port/path/data
      --odir=hdfs://host:port/path/out
      --schema=hdfs://host:port/path/schema.avsc
      --mrpy=mr.py --pydoop=/path/pydoop.tgz --avro=/path/avro.tgz
Example of mapper and reducer functions:
def mapper(ctx):
    "Read given context and emit key (job-id) and value (task) pairs"
    rec = ctx.value               # input record (decoded Avro document)
    jid = rec["jobid"]
    if jid is not None:
        ctx.emit(jid, rec["fwjr"]["task"])

def reducer(ctx):
    "Emit empty key and some data structure via given context"
    # ctx.values is an iterator; materialize it before emitting
    ctx.emit("", {"jobid": ctx.key, "task": list(ctx.values)})
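The mrjob script plugs these functions into a pydoop skeleton on the cluster, but their logic can be sanity-checked locally with a small fake context. The FakeContext class below is a hypothetical stand-in that mimics only the attributes used here (value, key, values, emit); it is not part of WMArchive or pydoop:

```python
# Minimal local harness to exercise the mapper/reducer logic outside
# of Hadoop. FakeContext is a hypothetical stand-in for the pydoop
# context object; it is not part of WMArchive or pydoop.

class FakeContext(object):
    "Mimic the small subset of the pydoop context API used by mr.py"
    def __init__(self, value=None, key=None, values=None):
        self.value = value      # input record seen by the mapper
        self.key = key          # key seen by the reducer
        self.values = values    # iterable of values seen by the reducer
        self.emitted = []       # collected (key, value) pairs
    def emit(self, key, value):
        self.emitted.append((key, value))

def mapper(ctx):
    "Read given context and emit key (job-id) and value (task) pairs"
    rec = ctx.value
    jid = rec["jobid"]
    if jid is not None:
        ctx.emit(jid, rec["fwjr"]["task"])

def reducer(ctx):
    "Emit empty key and some data structure via given context"
    ctx.emit("", {"jobid": ctx.key, "task": list(ctx.values)})

# Feed one synthetic record (hypothetical content) through the mapper
mctx = FakeContext(value={"jobid": 1, "fwjr": {"task": "/A/B/Task"}})
mapper(mctx)
print(mctx.emitted)   # [(1, '/A/B/Task')]

# Feed the grouped values for one key through the reducer
rctx = FakeContext(key=1, values=["/A/B/Task"])
reducer(rctx)
print(rctx.emitted)
```

On a real run the context objects are provided by pydoop and the records come from the Avro files under --hdir, so this harness only checks the record-handling logic, not the Hadoop wiring.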