
Exchange data between Spark and Flink interpreter paragraphs with InterpreterContext in Apache Zeppelin

If you need to exchange data from Flink to Spark or from Spark to Flink within Apache Zeppelin, you can use the InterpreterContext to store and reload data between the separate paragraphs.

You can load the InterpreterContext within a Spark paragraph and store the relevant data in its resource pool:

%spark

import org.apache.zeppelin.interpreter.InterpreterContext

// grab the resource pool of the current interpreter context
val resourcePool = InterpreterContext.get().getResourcePool()

// let the user pick a value via a dynamic form and store it in the pool
val n = z.select("name", Seq(("foo", "foo"), ("bar", "bar")))
resourcePool.put("name", n)
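
Before switching to Flink, you can read the value back from the pool, for example in another Spark paragraph, to confirm that it was stored; a minimal sketch, assuming the paragraph above has already run:

%spark

import org.apache.zeppelin.interpreter.InterpreterContext

// read the stored value back from the shared resource pool
val resourcePool = InterpreterContext.get().getResourcePool()
println(resourcePool.get("name").get.toString)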

In another paragraph that uses the Flink interpreter, you can load the InterpreterContext again and read the stored value.

%flink

import org.apache.zeppelin.interpreter.InterpreterContext

// the resource pool is shared across interpreters, so the value stored
// in the Spark paragraph is visible here as well
val resourcePool = InterpreterContext.get().getResourcePool()
resourcePool.get("name").get.toString
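
To actually use the retrieved value in a Flink job, you can convert it back to the value you stored and feed it into a transformation; a small sketch, assuming benv is the batch ExecutionEnvironment binding that Zeppelin's Flink interpreter usually provides:

%flink

import org.apache.flink.api.scala._

// read the name selected in the Spark paragraph and use it as a filter value
val name = resourcePool.get("name").get.toString
benv.fromElements(("foo", 1), ("bar", 2))
  .filter(_._1 == name)
  .print()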