
File Formats and Compression Algorithms Supported by Hive


Overview

As long as the file format and compression type are configured correctly (for example TextFile + Gzip, or SequenceFile + Snappy), Hive can read and parse the data as expected and make it queryable through SQL.

The SequenceFile format is itself designed to hold compressed content. Compressing a SequenceFile therefore does not mean generating the file first and compressing it afterwards; instead, the content fields are compressed while the SequenceFile is being written, and the result still presents itself as a SequenceFile. RCFile, ORCFile, Parquet, and Avro handle compression the same way.

The file formats covered below are TextFile, SequenceFile, RCFile, ORCFile, Parquet, and Avro, each paired with a compression codec.

TEXTFILE

Text file, uncompressed:

-- Create a table stored as a text file:
CREATE EXTERNAL TABLE student_text (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Import data into this table; this starts an MR job:
INSERT OVERWRITE TABLE student_text SELECT * FROM student;

The generated data file is uncompressed plain text:

hdfs dfs -cat /user/hive/warehouse/student_text/000000_0
1001810081,cheyo
1001810082,pku
1001810083,rocky
1001810084,stephen
2002820081,sql
2002820082,hello
2002820083,hijj
3001810081,hhhhhhh
3001810082,abbbbbb

Text file, DEFLATE compression:

-- Create a table stored as a text file:
CREATE TABLE student_text_def (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;

-- Import data:
INSERT OVERWRITE TABLE student_text_def SELECT * FROM student;

-- View data:
SELECT * FROM student_text_def;

Inspecting the data files shows several .deflate files:

hdfs dfs -ls /user/hive/warehouse/student_text_def/
-rw-r--r--  2015-09-16 12:48 /user/hive/warehouse/student_text_def/000000_0.deflate
-rw-r--r--  2015-09-16 12:48 /user/hive/warehouse/student_text_def/000001_0.deflate
-rw-r--r--  2015-09-16 12:48 /user/hive/warehouse/student_text_def/000002_0.deflate

Text file, Gzip compression:

-- Create a table stored as a text file:
CREATE TABLE student_text_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Import data:
INSERT OVERWRITE TABLE student_text_gzip SELECT * FROM student;

-- View data:
SELECT * FROM student_text_gzip;

The data files are several .gz files; decompressing one reveals the plaintext:

hdfs dfs -ls /user/hive/warehouse/student_text_gzip/
-rw-r--r--  2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000000_0.gz
-rw-r--r--  2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000001_0.gz
-rw-r--r--  2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000002_0.gz

Text file, Bzip2 compression:

-- Create a table stored as a text file:
CREATE TABLE student_text_bzip2 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to Bzip2:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;

-- Import data:
INSERT OVERWRITE TABLE student_text_bzip2 SELECT * FROM student;

-- View data:
SELECT * FROM student_text_bzip2;

The data files are several .bz2 files; decompressing one reveals the plaintext:

hdfs dfs -ls /user/hive/warehouse/student_text_bzip2
-rw-r--r--  2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000000_0.bz2
-rw-r--r--  2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000001_0.bz2
-rw-r--r--  2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000002_0.bz2

Text file, LZO compression:

-- Create table:
CREATE TABLE student_text_lzo (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to LZO:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;

-- Import data:
INSERT OVERWRITE TABLE student_text_lzo SELECT * FROM student;

-- Query data:
SELECT * FROM student_text_lzo;

The data files are several .lzo files; decompressing one reveals the plaintext. (Not tested here: the lzop library must be installed first.)
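A quick way to double-check what any of the tables above were declared with is Hive's metadata commands. The following is a minimal sketch in standard HiveQL; the exact output fields vary by Hive version. Note that for TEXTFILE tables the codec is chosen per job by the SET parameters and recognized at read time from the file extension, so it never appears in the table DDL itself:

-- Show the declared SerDe, InputFormat/OutputFormat, and table properties:
DESCRIBE FORMATTED student_text_gzip;

-- Print the full DDL, including the STORED AS clause:
SHOW CREATE TABLE student_text_gzip;

This is also useful later in the article for comparing the declarative ORC table properties against the session-level SET approach.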
Text file, LZ4 compression:

-- Create table:
CREATE TABLE student_text_lz4 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to LZ4:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;

-- Import data:
INSERT OVERWRITE TABLE student_text_lz4 SELECT * FROM student;

The data files are several .lz4 files; viewing one with cat shows only compressed bytes:

hdfs dfs -ls /user/hive/warehouse/student_text_lz4
-rw-r--r--  2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000000_0.lz4
-rw-r--r--  2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000001_0.lz4
-rw-r--r--  2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000002_0.lz4

Text file, Snappy compression:

-- Create table:
CREATE TABLE student_text_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Import data:
INSERT OVERWRITE TABLE student_text_snappy SELECT * FROM student;

-- Query data:
SELECT * FROM student_text_snappy;

The data files are several .snappy files; viewing one with cat shows only compressed bytes:

hdfs dfs -ls /user/hive/warehouse/student_text_snappy
Found 3 items
-rw-r--r--  2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000000_0.snappy
-rw-r--r--  2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000001_0.snappy
-rw-r--r--  2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000002_0.snappy

SEQUENCEFILE

SequenceFile, DEFLATE compression:

-- Create a table stored as a SequenceFile:
CREATE TABLE student_seq_def (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

-- Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;

-- Import data:
INSERT OVERWRITE TABLE student_seq_def SELECT * FROM student;

-- View data:
SELECT * FROM student_seq_def;

The resulting data file is binary rather than plain text.

SequenceFile, Gzip compression:

-- Create a table stored as a SequenceFile:
CREATE TABLE student_seq_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Import data:
INSERT OVERWRITE TABLE student_seq_gzip SELECT * FROM student;

-- View data:
SELECT * FROM student_seq_gzip;

The data file is binary and cannot be decompressed by gzip directly:

hdfs dfs -ls /user/hive/warehouse/student_seq_gzip/
-rw-r--r--  /user/hive/warehouse/student_seq_gzip/000000_0
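For SEQUENCEFILE tables, the codec settings above can also be combined with a compression granularity. The following is a minimal sketch assuming the classic mapred.* property names used throughout this article; on newer Hadoop the equivalent is mapreduce.output.fileoutputformat.compress.type:

-- Granularity of SequenceFile compression: NONE, RECORD, or BLOCK.
-- BLOCK compresses batches of records together and usually yields a
-- better ratio than the per-record default.
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Rewrite the SequenceFile table created above with block-level Gzip:
INSERT OVERWRITE TABLE student_seq_gzip SELECT * FROM student;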
RCFILE

RCFile, Gzip compression:

-- Create a table stored as an RCFile:
CREATE TABLE student_rcfile_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS RCFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Import data:
INSERT OVERWRITE TABLE student_rcfile_gzip SELECT id, name FROM student;

-- View data:
SELECT * FROM student_rcfile_gzip;

ORCFILE

ORCFile carries its own table properties for choosing the compression format; the Hive session parameters used above are normally not needed for it.

ORCFile, ZLIB compression:

-- Create table:
CREATE TABLE student_orcfile_zlib (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORCFILE TBLPROPERTIES ("orc.compress"="ZLIB");

-- Import data:
INSERT OVERWRITE TABLE student_orcfile_zlib SELECT id, name FROM student;

-- View data:
SELECT * FROM student_orcfile_zlib;

ORCFile, Snappy compression:

-- Create table:
CREATE TABLE student_orcfile_snappy2 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORCFILE TBLPROPERTIES ("orc.compress"="SNAPPY");

-- Import data:
INSERT OVERWRITE TABLE student_orcfile_snappy2 SELECT id, name FROM student;

-- View data:
SELECT * FROM student_orcfile_snappy2;

The following approach is generally not recommended. Compressing an ORC table this way gives a result that differs from the same-codec (SNAPPY) compression above; the exact reason still needs further investigation.

-- Create table:
CREATE TABLE student_orcfile_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORCFILE;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Import data:
INSERT OVERWRITE TABLE student_orcfile_snappy SELECT id, name FROM student;

-- View data:
SELECT * FROM student_orcfile_snappy;

PARQUET

Parquet, Snappy compression:

-- Create table:
CREATE TABLE student_parquet_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS PARQUET;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Import data:
INSERT OVERWRITE TABLE student_parquet_snappy SELECT id, name FROM student;

-- Query data:
SELECT * FROM student_parquet_snappy;

AVRO

Avro, Snappy compression:

-- Create table:
CREATE TABLE student_avro_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS AVRO;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Import data:
INSERT OVERWRITE TABLE student_avro_snappy SELECT id, name FROM student;

-- Query data:
SELECT * FROM student_avro_snappy;
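Like ORC, Parquet and Avro can also take their compression setting declaratively instead of through the long list of mapred.* parameters above. The following is a minimal sketch assuming a reasonably recent Hive; parquet.compression is the property read by the Parquet writer and avro.output.codec the one read by the Avro SerDe, and behavior varies by version. The table name student_parquet_snappy2 is hypothetical, used only for illustration:

-- Parquet: session-level property, or a table property baked into the DDL
-- (student_parquet_snappy2 is a hypothetical illustration table):
SET parquet.compression=SNAPPY;
CREATE TABLE student_parquet_snappy2 (id STRING, name STRING)
STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");

INSERT OVERWRITE TABLE student_parquet_snappy2 SELECT id, name FROM student;

-- Avro: enable compressed output, then pick the codec by name
-- (snappy or deflate):
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;

The declarative form travels with the table, so every writer produces consistently compressed files without having to repeat the session-level SET commands.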