
Running Apache Beam on GCP Cloud Dataflow

Date: 2023-04-01 15:34:53 · Java

In the earlier post 《Apache Beam入门及Java SDK开发初体验》 (an introduction to Apache Beam and the Java SDK), we covered the basic concepts of Apache Beam and how to run a pipeline locally. This article shows how to run the same code on GCP Cloud Dataflow.

Run the Maven archetype command locally to create the project:

```
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.beam \
  -DarchetypeArtifactId=beam-sdks-java-maven-archetypes-examples \
  -DarchetypeVersion=2.37.0 \
  -DgroupId=org.example \
  -DartifactId=word-count-beam \
  -Dversion="0.1" \
  -Dpackage=org.apache.beam.examples \
  -DinteractiveMode=false
```

This creates a directory word-count-beam containing a sample project, which can be used after a few simple modifications. Build it once first to make sure the dependencies download successfully:

```
$ mvn clean package
```

To run locally from IDEA, add the following program arguments:

```
--output=pkslow-beam-counts --inputFile=/Users/larry/IdeaProjects/pkslow-samples/README.md
```

The file to process is README.md and the output prefix is pkslow-beam-counts. Alternatively, run from the command line:

```
mvn compile exec:java \
  -Dexec.mainClass=org.apache.beam.examples.WordCount \
  -Dexec.args="--output=pkslow-beam-counts --inputFile=/Users/larry/IdeaProjects/pkslow-samples/README.md"
```

To run on GCP Cloud Dataflow, prepare the environment first: you need a service account and its key (and, of course, the required permissions); the relevant GCP services must be enabled; and you need to create a Bucket and upload the file to be processed.

Then execute the following command locally:

```
$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
  -Dexec.args="--runner=DataflowRunner --gcpTempLocation=gs://pkslow-dataflow/temp \
  --project=pkslow --region=us-east1 \
  --inputFile=gs://pkslow-dataflow/input/README.md --output=gs://pkslow-dataflow/pkslow-counts" \
  -Pdataflow-runner
```

The log output is fairly long; most of it is the upload of the required jar packages to the temp directory, since they will be referenced during execution. For example:

```
Nov 03, 2022 8:41:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
INFO: Uploading /Users/larry/.m2/repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar to gs://pkslow-dataflow/temp/staging/commons-compress-1.8.1-X8oTZQP4bsxsth-9F7E31Z5WtFx6VJTmuP08q9Rpf70.jar
Nov 03, 2022 8:41:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
INFO: Uploading /Users/larry/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar to gs://pkslow-dataflow/temp/staging/jackson-mapper-asl-1.9.13-dOegenby7breKTEqWi68z6AZEovAIezjhW12GX6b4MI.jar
```

Checking the Bucket confirms that the jars were indeed uploaded. Dataflow then creates the job and starts running it; the job appears in the console UI. Click a job to view its execution graph and more details. Finally, the results can be found in the output Bucket.

The code is available on GitHub: https://github.com/LarryDpk/p...
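As an aside, for readers who want intuition for what the WordCount example actually computes before sending it to Dataflow, here is a plain-Java sketch (no Beam involved) of the same logic: split the text into words and count occurrences. The class name and the tokenization regex are illustrative assumptions; the real pipeline logic lives in org.apache.beam.examples.WordCount.

```java
import java.util.Map;
import java.util.TreeMap;

// Plain-Java sketch of what the WordCount pipeline computes.
// For intuition only; it is not how Beam executes the job.
public class LocalWordCount {

    // Count words, splitting on runs of non-letter characters
    // (an assumption; the Beam example tokenizes similarly).
    public static Map<String, Long> count(String text) {
        Map<String, Long> counts = new TreeMap<>();
        for (String token : text.split("[^\\p{L}]+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1L, Long::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each line mirrors the "word: count" format of the example's output files.
        count("to be or not to be")
            .forEach((word, n) -> System.out.println(word + ": " + n));
    }
}
```

The Beam version distributes exactly this computation: the split becomes a ParDo and the counting becomes a Count transform, which is why the output shards on GCS contain the same word/count pairs.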
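The environment preparation described above (service account, enabled services, Bucket, input file) can be sketched with gcloud/gsutil commands along these lines. The project, bucket, and region names are taken from this article; the service-account name and key path are illustrative placeholders:

```shell
# Enable the Dataflow service for the project (assumes project "pkslow").
gcloud services enable dataflow.googleapis.com --project=pkslow

# Create a service account and download a key for it
# ("beam-runner" and key.json are placeholder names).
gcloud iam service-accounts create beam-runner --project=pkslow
gcloud iam service-accounts keys create key.json \
  --iam-account=beam-runner@pkslow.iam.gserviceaccount.com

# Point the Beam/GCP client libraries at the key.
export GOOGLE_APPLICATION_CREDENTIALS=key.json

# Create the bucket and upload the file to process.
gsutil mb -l us-east1 gs://pkslow-dataflow
gsutil cp README.md gs://pkslow-dataflow/input/README.md
```

The service account also needs suitable IAM roles on the project (for example, permission to run Dataflow jobs and read/write the bucket), granted by whoever administers the project.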
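The -Pdataflow-runner flag at the end of the Dataflow command activates a Maven profile that the examples archetype generates in pom.xml, so the Dataflow runner dependency is only pulled in when needed. A sketch of what that profile looks like (check the generated pom.xml for the exact version property and contents):

```
<profile>
  <id>dataflow-runner</id>
  <dependencies>
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
      <version>${beam.version}</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</profile>
```

Keeping runners in profiles like this is the archetype's way of letting the same pipeline code run on the DirectRunner locally and on DataflowRunner in the cloud without editing the dependency list by hand.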