Hive 2.3.8 + Hadoop 2.10.1 Compatibility Test

Author: anonymous contributor. Published: 2021-02-23 17:20:20


Environment

namenode.test.local      CentOS 7.9, JDK 1.8_221, 4 cores / 8 GB RAM / 200 GB disk
datanode1-3.test.local   CentOS 7.9, JDK 1.8_221, 12 cores / 16 GB RAM / 200 GB disk
hive.test.local          CentOS 7.9, JDK 1.8_221, MySQL 5.7.32, 8 cores / 8 GB RAM / 200 GB disk

Configuration Files

/etc/profile

...
export MAVEN_HOME=/opt/maven
export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hive/bin:$MAVEN_HOME/bin
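
To confirm the variables take effect (a quick sanity check; paths as configured above):

source /etc/profile
java -version       # should report 1.8.0_221
hadoop version      # should report 2.10.1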

hadoop-env.sh

export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.test.local:8020</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
</configuration>
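
The two hadoop.proxyuser.root.* properties let HiveServer2 (run as root here) impersonate the querying user; without them, beeline connections typically fail with impersonation errors. One way to confirm the values were picked up:

hdfs getconf -confKey hadoop.proxyuser.root.hosts
hdfs getconf -confKey hadoop.proxyuser.root.groups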

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/dfs/nn</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/dfs/dn</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
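
Before the first start-up, the directories referenced above must exist on the respective nodes and the NameNode needs a one-time format. A minimal sketch, assuming the /dfs paths from this file:

# on the namenode
mkdir -p /dfs/nn
hdfs namenode -format

# on each datanode
mkdir -p /dfs/dn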

yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>namenode.test.local</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

hive-env.sh

export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop
export HIVE_HOME=/opt/hive
export HADOOP_HEAPSIZE=2048

hive-site.xml

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive@123</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hive.test.local:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://hive.test.local:9083</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>hive.exec.scratchdir</name>
        <value>/tmp/hive</value>
    </property>
    <property>
        <name>hive.exec.local.scratchdir</name>
        <value>/tmp/hive/user</value>
    </property>
    <property>
        <name>hive.querylog.location</name>
        <value>/tmp/hive/querylog</value>
    </property>
</configuration>
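
One step this walkthrough leaves implicit: with a MySQL-backed metastore, the schema must be initialized once before the services start. A minimal sketch, assuming the MySQL JDBC driver jar has already been copied into /opt/hive/lib (the exact connector version is an assumption):

# cp mysql-connector-java-5.1.x.jar /opt/hive/lib/
schematool -dbType mysql -initSchema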

Starting the Services

hadoop

[root@namenode ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [namenode.test.local]
namenode.test.local: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-namenode.test.local.out
datanode2.test.local: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-datanode2.test.local.out
datanode1.test.local: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-datanode1.test.local.out
datanode3.test.local: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-datanode3.test.local.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-namenode.test.local.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-namenode.test.local.out
datanode1.test.local: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-datanode1.test.local.out
datanode3.test.local: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-datanode3.test.local.out
datanode2.test.local: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-datanode2.test.local.out
[root@namenode ~]#

hive

# Start the Hive metastore
[root@hive ~]# hive --service metastore
2021-02-22 16:12:38: Starting Hive Metastore Server

# Start HiveServer2
[root@hive ~]# hiveserver2
which: no hbase in (/usr/local/jdk/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hbase/bin:/opt/hive/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jdk/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hive/bin:/opt/maven/bin:/root/bin)
2021-02-22 16:59:11: Starting HiveServer2
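
With both daemons running, a quick beeline smoke test (host, port, and user as configured above) verifies that HiveServer2 accepts connections:

beeline -n root -u 'jdbc:hive2://hive.test.local:10000' -e 'show databases;'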

Testing Tool

hive-testbench

hive-testbench
A testbench for experimenting with Apache Hive at any data scale.

Overview
The hive-testbench is a data generator and set of queries that lets you experiment with Apache Hive at scale. The testbench allows you to experience base Hive performance on large datasets, and gives an easy way to see the impact of Hive tuning parameters and advanced settings.

Prerequisites
You will need:

Hadoop 2.2 or later cluster or Sandbox.
Apache Hive.
Between 15 minutes and 2 days to generate data (depending on the Scale Factor you choose and available hardware).
If you plan to generate 1TB or more of data, using Apache Hive 13+ to generate the data is STRONGLY suggested.
Install and Setup
All of these steps should be carried out on your Hadoop cluster.

Step 1: Prepare your environment.

In addition to Hadoop and Hive, before you begin ensure gcc is installed and available on your system path. If your system does not have it, install it using yum or apt-get.
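
On CentOS 7, for example:

yum install -y gcc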

Step 2: Decide which test suite(s) you want to use.

hive-testbench comes with data generators and sample queries based on both the TPC-DS and TPC-H benchmarks. You can choose to use either or both of these benchmarks for experimentation. More information about these benchmarks can be found at the Transaction Processing Performance Council homepage.

Step 3: Compile and package the appropriate data generator.

For TPC-DS, ./tpcds-build.sh downloads, compiles and packages the TPC-DS data generator. For TPC-H, ./tpch-build.sh downloads, compiles and packages the TPC-H data generator.

Step 4: Decide how much data you want to generate.

You need to decide on a "Scale Factor" which represents how much data you will generate. Scale Factor roughly translates to gigabytes, so a Scale Factor of 100 is about 100 gigabytes and one terabyte is Scale Factor 1000. Decide how much data you want and keep it in mind for the next step. If you have a cluster of 4-10 nodes or just want to experiment at a smaller scale, scale 1000 (1 TB) of data is a good starting point. If you have a large cluster, you may want to choose Scale 10000 (10 TB) or more. The notion of scale factor is similar between TPC-DS and TPC-H.

If you want to generate a large amount of data, you should use Hive 13 or later. Hive 13 introduced an optimization that allows far more scalable data partitioning. Hive 12 and lower will likely crash if you generate more than a few hundred GB of data and tuning around the problem is difficult. You can generate text or RCFile data in Hive 13 and use it in multiple versions of Hive.

Step 5: Generate and load the data.

The scripts tpcds-setup.sh and tpch-setup.sh generate and load data for TPC-DS and TPC-H, respectively. General usage is tpcds-setup.sh scale_factor [directory] or tpch-setup.sh scale_factor [directory].

Some examples:

Build 1 TB of TPC-DS data: ./tpcds-setup.sh 1000

Build 1 TB of TPC-H data: ./tpch-setup.sh 1000

Build 100 TB of TPC-DS data: ./tpcds-setup.sh 100000

Build 30 TB of text formatted TPC-DS data: FORMAT=textfile ./tpcds-setup 30000

Build 30 TB of RCFile formatted TPC-DS data: FORMAT=rcfile ./tpcds-setup 30000

Also check the other parameters in the setup scripts; an important one is BUCKET_DATA.

Step 6: Run queries.

More than 50 sample TPC-DS queries and all TPC-H queries are included for you to try. You can use hive, beeline or the SQL tool of your choice. The testbench also includes a set of suggested settings.

This example assumes you have generated 1 TB of TPC-DS data during Step 5:

 cd sample-queries-tpcds
 hive -i testbench.settings
 hive> use tpcds_bin_partitioned_orc_1000;
 hive> source query55.sql;
Note that the database is named based on the Data Scale chosen in Step 4. At Data Scale 10000, your database will be named tpcds_bin_partitioned_orc_10000. At Data Scale 1000 it would be named tpcds_bin_partitioned_orc_1000. You can always run show databases to get a list of available databases.

Similarly, if you generated 1 TB of TPC-H data during Step 5:

 cd sample-queries-tpch
 hive -i testbench.settings
 hive> use tpch_flat_orc_1000;
 hive> source tpch_query1.sql;

Test Procedure

Data Generation

# Change the HiveServer2 address
[root@namenode hive-testbench-hdp3]# grep beeline tpcds-setup.sh
#HIVE="beeline -n hive -u 'jdbc:hive2://localhost:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=default' "
HIVE="beeline -n root -u 'jdbc:hive2://hive.test.local:10000' "
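
# The same edit can be scripted; a hedged sed sketch (assumes the active
# HIVE= assignment is the only uncommented line beginning with HIVE=):
sed -i "s|^HIVE=.*|HIVE=\"beeline -n root -u 'jdbc:hive2://hive.test.local:10000' \"|" tpcds-setup.sh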

# Generate 10 GB of test data
[root@namenode hive-testbench-hdp3]# ./tpcds-setup.sh 10
ls: `/tmp/tpcds-generate/10': No such file or directory
Generating data at scale factor 10.
21/02/22 16:23:08 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
21/02/22 16:23:08 INFO client.RMProxy: Connecting to ResourceManager at namenode.test.local/192.168.198.35:8032
21/02/22 16:23:08 INFO input.FileInputFormat: Total input files to process : 1
21/02/22 16:23:08 INFO mapreduce.JobSubmitter: number of splits:10
21/02/22 16:23:09 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
21/02/22 16:23:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1613981536344_0001
21/02/22 16:23:09 INFO conf.Configuration: resource-types.xml not found
21/02/22 16:23:09 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/02/22 16:23:09 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/02/22 16:23:09 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/02/22 16:23:09 INFO impl.YarnClientImpl: Submitted application application_1613981536344_0001
21/02/22 16:23:09 INFO mapreduce.Job: The url to track the job: http://namenode.test.local:8088/proxy/application_1613981536344_0001/
21/02/22 16:23:09 INFO mapreduce.Job: Running job: job_1613981536344_0001
21/02/22 16:23:16 INFO mapreduce.Job: Job job_1613981536344_0001 running in uber mode : false
21/02/22 16:23:16 INFO mapreduce.Job:  map 0% reduce 0%
21/02/22 16:25:30 INFO mapreduce.Job:  map 10% reduce 0%
21/02/22 16:25:34 INFO mapreduce.Job:  map 20% reduce 0%
21/02/22 16:25:50 INFO mapreduce.Job:  map 30% reduce 0%
21/02/22 16:25:57 INFO mapreduce.Job:  map 40% reduce 0%
21/02/22 16:26:01 INFO mapreduce.Job:  map 60% reduce 0%
21/02/22 16:26:02 INFO mapreduce.Job:  map 70% reduce 0%
21/02/22 16:26:04 INFO mapreduce.Job:  map 90% reduce 0%
21/02/22 16:29:28 INFO mapreduce.Job:  map 100% reduce 0%
21/02/22 16:29:28 INFO mapreduce.Job: Job job_1613981536344_0001 completed successfully
21/02/22 16:29:28 INFO mapreduce.Job: Counters: 31
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=2104820
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=4519
		HDFS: Number of bytes written=12194709083
		HDFS: Number of read operations=50
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=89
	Job Counters
		Killed map tasks=3
		Launched map tasks=13
		Other local map tasks=13
		Total time spent by all maps in occupied slots (ms)=1967554
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=1967554
		Total vcore-milliseconds taken by all map tasks=1967554
		Total megabyte-milliseconds taken by all map tasks=2014775296
	Map-Reduce Framework
		Map input records=10
		Map output records=0
		Input split bytes=1200
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=22683
		CPU time spent (ms)=496160
		Physical memory (bytes) snapshot=2480996352
		Virtual memory (bytes) snapshot=21693628416
		Total committed heap usage (bytes)=1499987968
	File Input Format Counters
		Bytes Read=3319
	File Output Format Counters
		Bytes Written=0
TPC-DS text data generation complete.

... ...

Optimizing table catalog_page (16/24).
Optimizing table web_site (17/24).
Optimizing table store_sales (18/24).
Optimizing table store_returns (19/24).
Optimizing table web_sales (20/24).
Optimizing table web_returns (21/24).
Optimizing table catalog_sales (22/24).
Optimizing table catalog_returns (23/24).
Optimizing table inventory (24/24).
Loading constraints
0: jdbc:hive2://hive.test.local:10000> -- set hivevar:DB=tpcds_bin_partitioned_orc_10000
0: jdbc:hive2://hive.test.local:10000>
0: jdbc:hive2://hive.test.local:10000> alter table customer_address add constraint ${DB}_pk_ca primary key (ca_address_sk) disable novalidate rely;
Data loaded into database tpcds_bin_partitioned_orc_10.
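
# A quick way to verify the load: list the tables in the new database over
# the same beeline connection used by the setup script
beeline -n root -u 'jdbc:hive2://hive.test.local:10000/tpcds_bin_partitioned_orc_10' -e 'show tables;'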


# Check the size of the generated data
[root@namenode hive-testbench-hdp3]# hadoop fs -du -h /tmp
42.8 M  /tmp/hadoop-yarn
0       /tmp/hive
11.4 G  /tmp/tpcds-generate
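
# /tmp/tpcds-generate holds only the raw text data; the ORC tables live under
# the warehouse directory set in hive-site.xml and can be checked the same way
hadoop fs -du -h /user/hive/warehouse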

Query Test

[root@namenode hive-testbench-hdp3]# cd sample-queries-tpcds/
[root@namenode sample-queries-tpcds]# hive
which: no hbase in (/usr/local/jdk/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hbase/bin:/opt/hive/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/jdk/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hive/bin:/opt/jdk/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/hive/bin:/opt/maven/bin)

Logging initialized using configuration in file:/opt/hive/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
default
tpcds_bin_partitioned_orc_10
tpcds_text_10
Time taken: 0.984 seconds, Fetched: 3 row(s)
hive> use tpcds_bin_partitioned_orc_10;
OK
Time taken: 0.028 seconds
hive> source query55.sql;
No Stats for tpcds_bin_partitioned_orc_10@date_dim, Columns: d_moy, d_date_sk, d_year
No Stats for tpcds_bin_partitioned_orc_10@store_sales, Columns: ss_ext_sales_price, ss_item_sk
No Stats for tpcds_bin_partitioned_orc_10@item, Columns: i_manager_id, i_brand, i_brand_id, i_item_sk
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20210222173950_896147ed-09f8-45dc-98be-01607866a9b8
Total jobs = 2
2021-02-22 17:40:02	Starting to launch local task to process map join;	maximum memory = 477626368
2021-02-22 17:40:04	Dump the side-table for tag: 1 with group count: 1819 into file: file:/tmp/hive/tmp/user/2caa1bb0-be01-485c-a16d-0b416e35dec9/hive_2021-02-22_17-39-50_274_1841607821466582182-1/-local-10007/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2021-02-22 17:40:04	Uploaded 1 File to: file:/tmp/hive/tmp/user/2caa1bb0-be01-485c-a16d-0b416e35dec9/hive_2021-02-22_17-39-50_274_1841607821466582182-1/-local-10007/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (139826 bytes)
2021-02-22 17:40:04	Dump the side-table for tag: 0 with group count: 30 into file: file:/tmp/hive/tmp/user/2caa1bb0-be01-485c-a16d-0b416e35dec9/hive_2021-02-22_17-39-50_274_1841607821466582182-1/-local-10007/HashTable-Stage-3/MapJoin-mapfile10--.hashtable
2021-02-22 17:40:04	Uploaded 1 File to: file:/tmp/hive/tmp/user/2caa1bb0-be01-485c-a16d-0b416e35dec9/hive_2021-02-22_17-39-50_274_1841607821466582182-1/-local-10007/HashTable-Stage-3/MapJoin-mapfile10--.hashtable (893 bytes)
2021-02-22 17:40:04	End of local task; Time Taken: 1.829 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 5
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=number
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=number
In order to set a constant number of reducers:
  set mapreduce.job.reduces=number
Starting Job = job_1613981536344_0032, Tracking URL = http://namenode.test.local:8088/proxy/application_1613981536344_0032/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1613981536344_0032
Hadoop job information for Stage-3: number of mappers: 5; number of reducers: 5
2021-02-22 17:40:15,479 Stage-3 map = 0%,  reduce = 0%
2021-02-22 17:40:24,777 Stage-3 map = 20%,  reduce = 0%, Cumulative CPU 12.25 sec
2021-02-22 17:40:32,004 Stage-3 map = 42%,  reduce = 0%, Cumulative CPU 133.69 sec
2021-02-22 17:40:38,170 Stage-3 map = 56%,  reduce = 0%, Cumulative CPU 193.37 sec
2021-02-22 17:40:41,263 Stage-3 map = 56%,  reduce = 1%, Cumulative CPU 193.96 sec
2021-02-22 17:40:42,288 Stage-3 map = 56%,  reduce = 4%, Cumulative CPU 195.27 sec
2021-02-22 17:40:43,317 Stage-3 map = 81%,  reduce = 7%, Cumulative CPU 232.46 sec
2021-02-22 17:40:44,340 Stage-3 map = 92%,  reduce = 7%, Cumulative CPU 246.32 sec
2021-02-22 17:40:46,382 Stage-3 map = 100%,  reduce = 7%, Cumulative CPU 252.23 sec
2021-02-22 17:40:47,413 Stage-3 map = 100%,  reduce = 37%, Cumulative CPU 256.92 sec
2021-02-22 17:40:48,433 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 266.17 sec
MapReduce Total cumulative CPU time: 4 minutes 26 seconds 170 msec
Ended Job = job_1613981536344_0032
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=number
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=number
In order to set a constant number of reducers:
  set mapreduce.job.reduces=number
Starting Job = job_1613981536344_0033, Tracking URL = http://namenode.test.local:8088/proxy/application_1613981536344_0033/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1613981536344_0033
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 1
2021-02-22 17:40:58,959 Stage-4 map = 0%,  reduce = 0%
2021-02-22 17:41:04,074 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 2.24 sec
2021-02-22 17:41:10,210 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 5.1 sec
MapReduce Total cumulative CPU time: 5 seconds 100 msec
Ended Job = job_1613981536344_0033
MapReduce Jobs Launched:
Stage-Stage-3: Map: 5  Reduce: 5   Cumulative CPU: 266.17 sec   HDFS Read: 181322100 HDFS Write: 30670 SUCCESS
Stage-Stage-4: Map: 1  Reduce: 1   Cumulative CPU: 5.1 sec   HDFS Read: 38479 HDFS Write: 8225 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 31 seconds 270 msec
OK
5004001	edu packscholar #1                                	985090.91
4003001	exportiedu pack #1                                	749796.53
2003001	exportiimporto #1                                 	740017.41
3003001	exportiexporti #1                                 	715635.78
3001001	amalgexporti #1                                   	648516.60
4004001	edu packedu pack #1                               	647306.31
1001001	amalgamalg #1                                     	615682.83
5001001	amalgscholar #1                                   	608734.34
2004001	edu packimporto #1                                	602920.44
4001001	amalgedu pack #1                                  	593830.28
1004001	edu packamalg #1                                  	538690.18
2001001	amalgimporto #1                                   	509721.01
5003001	exportischolar #1                                 	509119.63
1002001	importoamalg #1                                   	432381.01
2002001	importoimporto #1                                 	430232.14
3004001	edu packexporti #1                                	418713.13
1003001	exportiamalg #1                                   	326178.15
1004002	edu packamalg #2                                  	318065.66
5003002	exportischolar #2                                 	317527.51
3002001	importoexporti #1                                 	294773.87
4003002	exportiedu pack #2                                	288009.92
5001002	amalgscholar #2                                   	276310.99
1003002	exportiamalg #2                                   	276127.56
4002001	importoedu pack #1                                	262465.67
2004002	edu packimporto #2                                	259237.10
5002001	importoscholar #1                                 	246532.02
1001002	amalgamalg #2                                     	238708.14
5002002	importoscholar #2                                 	237393.25
4001002	amalgedu pack #2                                  	216028.56
4002002	importoedu pack #2                                	212750.27
1002002	importoamalg #2                                   	207273.90
2002002	importoimporto #2                                 	196153.44
4004002	edu packedu pack #2                               	196150.02
2003002	exportiimporto #2                                 	171982.44
2001002	amalgimporto #2                                   	165245.23
8006009	corpnameless #9                                   	149362.97
3002002	importoexporti #2                                 	145204.09
7015003	scholarnameless #3                                	133873.18
7013009	exportinameless #9                                	130139.79
9015003	scholarunivamalg #3                               	128759.80
7003008	exportibrand #8                                   	124947.61
3001002	amalgexporti #2                                   	117106.72
8005003	scholarnameless #3                                	109752.72
6012003	importobrand #3                                   	109332.83
7004006	edu packbrand #6                                  	105481.30
6003003	exporticorp #3                                    	104250.25
9010005	univunivamalg #5                                  	103386.73
7007010	brandbrand #10                                    	102883.40
8009001	maxinameless #1                                   	101379.46
9003002	exportimaxi #2                                    	99345.17
8015001	scholarmaxi #1                                    	97796.94
9014002	edu packunivamalg #2                              	96701.89
3003002	exportiexporti #2                                 	93401.96
8009008	maxinameless #8                                   	91274.89
6005007	scholarcorp #7                                    	91083.40
9016003	corpunivamalg #3                                  	90934.97
9005009	scholarmaxi #9                                    	90709.21
5004002	edu packscholar #2                                	90128.22
8007003	brandnameless #3                                  	89640.84
10009013	maxiunivamalg #13                                 	88603.18
10001014	amalgunivamalg #14                                	83967.58
10014001	edu packamalgamalg #1                             	83264.25
6004007	edu packcorp #7                                   	82821.56
8014004	edu packmaxi #4                                   	82562.89
6003005	exporticorp #5                                    	82151.42
8004007	edu packnameless #7                               	81992.68
8013003	exportimaxi #3                                    	81738.37
8005005	scholarnameless #5                                	78781.89
10009011	maxiunivamalg #11                                 	78542.32
6010003	univbrand #3                                      	77660.72
9004002	edu packmaxi #2                                   	77422.65
6011001	amalgbrand #1                                     	75666.07
10012001	importoamalgamalg #1                              	75611.21
8006006	corpnameless #6                                   	73473.83
10007014	brandunivamalg #14                                	73376.85
8001006	amalgnameless #6                                  	73374.93
9012002	importounivamalg #2                               	73261.59
6005001	scholarcorp #1                                    	73175.36
9012008	importounivamalg #8                               	72132.85
9007003	brandmaxi #3                                      	71743.79
8010005	univmaxi #5                                       	71661.03
7001001	amalgbrand #1                                     	71610.31
6005002	scholarcorp #2                                    	70764.54
9012005	importounivamalg #5                               	70397.87
9001009	amalgmaxi #9                                      	70248.11
7002006	importobrand #6                                   	70169.63
7016010	corpnameless #10                                  	69625.40
7014004	edu packnameless #4                               	69423.25
6008003	namelesscorp #3                                   	68945.08
8004001	edu packnameless #1                               	68807.75
9016009	corpunivamalg #9                                  	68247.42
7010001	univnameless #1                                   	68041.95
6008006	namelesscorp #6                                   	67879.37
6007006	brandcorp #6                                      	67440.24
10014016	edu packamalgamalg #16                            	67194.80
7001003	amalgbrand #3                                     	66805.69
7001002	amalgbrand #2                                     	66001.82
10011008	amalgamalgamalg #8                                	65685.86
8004003	edu packnameless #3                               	65608.69
10011011	amalgamalgamalg #11                               	65344.63
Time taken: 80.995 seconds, Fetched: 100 row(s)
hive>

Summary

Essentially no errors were encountered during the tests. The next step will be to evaluate Spark as the Hive execution engine.
