
Java TableScanOperator Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.hadoop.hive.ql.exec.TableScanOperator. If you have been wondering what TableScanOperator is for, how to use it, or where to find real examples of it in action, the curated examples below should help.



The TableScanOperator class belongs to the org.apache.hadoop.hive.ql.exec package. Six code examples of the class are presented below, ordered by popularity.
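Before the examples, a minimal sketch of where a TableScanOperator typically comes from: during planning, Hive places one at the root of each map-side operator tree, keyed by table alias. The MapWork-based lookup below is an illustrative assumption (the exact plan classes vary across Hive versions); Example 5 shows the same walk inside HiveInputFormat.

import org.apache.hadoop.hive.ql.exec.Operator;
import org.apache.hadoop.hive.ql.exec.TableScanOperator;
import org.apache.hadoop.hive.ql.plan.MapWork;
import org.apache.hadoop.hive.ql.plan.OperatorDesc;

public class TableScanLookup {
  /**
   * Returns the first TableScanOperator found among the per-alias operator
   * trees of a MapWork, or null if the plan contains none.
   */
  public static TableScanOperator findFirstTableScan(MapWork work) {
    for (Operator<? extends OperatorDesc> op : work.getAliasToWork().values()) {
      if (op instanceof TableScanOperator) {
        return (TableScanOperator) op;
      }
    }
    return null;
  }
}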

Example 1: estimate

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
@Override
public Estimation estimate(JobConf job, TableScanOperator ts, long remaining) throws HiveException {
  String hiveTableName = ts.getConf().getTableMetadata().getTableName();
  int reducerCount = job.getInt(hiveTableName + PhoenixStorageHandlerConstants.PHOENIX_REDUCER_NUMBER, 1);

  if (LOG.isDebugEnabled()) {
    LOG.debug("<<<<<<<<<< tableName : " + hiveTableName + " >>>>>>>>>>");
    LOG.debug("<<<<<<<<<< reducer count : " + reducerCount + " >>>>>>>>>>");
    LOG.debug("<<<<<<<<<< remaining : " + remaining + " >>>>>>>>>>");
  }

  long bytesPerReducer = job.getLong(HiveConf.ConfVars.BYTESPERREDUCER.varname,
      Long.parseLong(HiveConf.ConfVars.BYTESPERREDUCER.getDefaultValue()));
  long totalLength = reducerCount * bytesPerReducer;

  return new Estimation(0, totalLength);
}
 
Developer ID: mini666, Project: hive-phoenix-handler, Lines of code: 17, Source file: PhoenixStorageHandler.java
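As a usage note, the estimate above multiplies the configured reducer count by Hive's bytes-per-reducer setting. Here is a minimal, self-contained sketch of that lookup and arithmetic; the reducer count of 4 is an assumed value, whereas Example 1 reads it from the job configuration under a Phoenix-specific key.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.mapred.JobConf;

public class SizeEstimateDemo {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Same fallback as Example 1: use Hive's default when
    // hive.exec.reducers.bytes.per.reducer is not set explicitly.
    long bytesPerReducer = job.getLong(HiveConf.ConfVars.BYTESPERREDUCER.varname,
        Long.parseLong(HiveConf.ConfVars.BYTESPERREDUCER.getDefaultValue()));
    int reducerCount = 4; // assumed; Example 1 reads this from the job configuration
    long totalLength = reducerCount * bytesPerReducer;
    System.out.println("estimated total length: " + totalLength + " bytes");
  }
}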


Example 2: addSplitsForGroup

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
private void addSplitsForGroup(List<Path> dirs, TableScanOperator tableScan, JobConf conf,
    InputFormat inputFormat, Class<? extends InputFormat> inputFormatClass, int splits,
    TableDesc table, List<InputSplit> result) throws IOException {

  Utilities.copyTablePropertiesToConf(table, conf);

  if (tableScan != null) {
    pushFilters(conf, tableScan);
  }

  FileInputFormat.setInputPaths(conf, dirs.toArray(new Path[dirs.size()]));
  conf.setInputFormat(inputFormat.getClass());

  int headerCount = 0;
  int footerCount = 0;
  if (table != null) {
    headerCount = Utilities.getHeaderCount(table);
    footerCount = Utilities.getFooterCount(table, conf);
    if (headerCount != 0 || footerCount != 0) {
      // Input file has a header or footer, so it cannot be split.
      conf.setLong(
          ShimLoader.getHadoopShims().getHadoopConfNames().get("MAPREDMINSPLITSIZE"),
          Long.MAX_VALUE);
    }
  }

  InputSplit[] iss = inputFormat.getSplits(conf, splits);
  for (InputSplit is : iss) {
    result.add(new HiveInputSplit(is, inputFormatClass.getName()));
  }
}
 
Developer ID: mini666, Project: hive-phoenix-handler, Lines of code: 32, Source file: HiveInputFormat.java
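The header/footer branch above deserves a closer look: when a table skips header or footer lines, each file must stay in a single split so those lines can be located. Below is a small sketch of where the counts come from, assuming the standard Hive table properties skip.header.line.count and skip.footer.line.count (which is what Utilities.getHeaderCount/getFooterCount consult).

import java.util.Properties;

public class HeaderFooterSplitDemo {
  public static void main(String[] args) {
    Properties tableProps = new Properties();
    tableProps.setProperty("skip.header.line.count", "1");

    int headerCount = Integer.parseInt(tableProps.getProperty("skip.header.line.count", "0"));
    int footerCount = Integer.parseInt(tableProps.getProperty("skip.footer.line.count", "0"));

    if (headerCount != 0 || footerCount != 0) {
      // Example 2 raises the minimum split size to Long.MAX_VALUE for exactly
      // this case, forcing a single split per file.
      System.out.println("file must not be split (header=" + headerCount
          + ", footer=" + footerCount + ")");
    }
  }
}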


Example 3: pushFilters

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
private void pushFilters(final JobConf jobConf, final TableScanOperator tableScan) {

    final TableScanDesc scanDesc = tableScan.getConf();
    if (scanDesc == null) {
      LOG.debug("Not pushing filters because TableScanDesc is null");
      return;
    }

    // construct column name list for reference by filter push down
    Utilities.setColumnNameList(jobConf, tableScan);

    // push down filters
    final ExprNodeDesc filterExpr = scanDesc.getFilterExpr();
    if (filterExpr == null) {
      LOG.debug("Not pushing filters because FilterExpr is null");
      return;
    }

    final String filterText = filterExpr.getExprString();
    final String filterExprSerialized = Utilities.serializeExpression(filterExpr);
    jobConf.set(
            TableScanDesc.FILTER_TEXT_CONF_STR,
            filterText);
    jobConf.set(
            TableScanDesc.FILTER_EXPR_CONF_STR,
            filterExprSerialized);
  }
 
Developer ID: apache, Project: parquet-mr, Lines of code: 28, Source file: Hive012Binding.java
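For the consumer side of this push-down, a storage handler's reader can fetch the pushed filter back out of the JobConf. A minimal sketch, assuming the reader only needs the human-readable filter text; deserializing the expression itself would go through the matching Utilities.deserializeExpression call in this Hive version.

import org.apache.hadoop.hive.ql.plan.TableScanDesc;
import org.apache.hadoop.mapred.JobConf;

public class PushedFilterReader {
  /**
   * Returns the filter text stored by pushFilters under
   * TableScanDesc.FILTER_TEXT_CONF_STR, or null if no filter was pushed.
   */
  public static String readPushedFilterText(JobConf jobConf) {
    return jobConf.get(TableScanDesc.FILTER_TEXT_CONF_STR);
  }
}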


Example 4: pushFilters

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
public static void pushFilters(JobConf jobConf, TableScanOperator tableScan) {
  // ensure filters are not set from previous pushFilters
  jobConf.unset(TableScanDesc.FILTER_TEXT_CONF_STR);
  jobConf.unset(TableScanDesc.FILTER_EXPR_CONF_STR);
  TableScanDesc scanDesc = tableScan.getConf();
  if (scanDesc == null) {
    return;
  }
  
  // 2015-10-27 Added by JeongMin Ju:
  // force the read-column list so the InputFormat can resolve target columns
  // for join queries (see Example 6, setReadColumns).
  setReadColumns(jobConf, tableScan);
  
  // construct column name list and types for reference by filter push down
  Utilities.setColumnNameList(jobConf, tableScan);
  Utilities.setColumnTypeList(jobConf, tableScan);
  // push down filters
  ExprNodeGenericFuncDesc filterExpr = (ExprNodeGenericFuncDesc)scanDesc.getFilterExpr();
  if (filterExpr == null) {
    return;
  }

  Serializable filterObject = scanDesc.getFilterObject();
  if (filterObject != null) {
    jobConf.set(TableScanDesc.FILTER_OBJECT_CONF_STR, Utilities.serializeObject(filterObject));
  }

  String filterText = filterExpr.getExprString();
  String filterExprSerialized = Utilities.serializeExpression(filterExpr);
  if (LOG.isDebugEnabled()) {
    LOG.debug("Filter text = " + filterText);
    LOG.debug("Filter expression = " + filterExprSerialized);
  }
  jobConf.set(
    TableScanDesc.FILTER_TEXT_CONF_STR,
    filterText);
  jobConf.set(
    TableScanDesc.FILTER_EXPR_CONF_STR,
    filterExprSerialized);
}
 
Developer ID: mini666, Project: hive-phoenix-handler, Lines of code: 42, Source file: HiveInputFormat.java
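Unlike Example 3, this variant first unsets both filter keys. The point is that a JobConf can be reused across tables in one query; without the unset, a table with no filter of its own would inherit the previous table's predicate. A small sketch of that failure mode (the filter string is an assumed placeholder):

import org.apache.hadoop.hive.ql.plan.TableScanDesc;
import org.apache.hadoop.mapred.JobConf;

public class StaleFilterDemo {
  public static void main(String[] args) {
    JobConf jobConf = new JobConf();
    // Pretend a previous table's pushFilters call left this behind.
    jobConf.set(TableScanDesc.FILTER_TEXT_CONF_STR, "(c1 = 1)");

    // Example 4 clears both keys before deciding whether to push anything.
    jobConf.unset(TableScanDesc.FILTER_TEXT_CONF_STR);
    jobConf.unset(TableScanDesc.FILTER_EXPR_CONF_STR);

    System.out.println("filter after unset: " + jobConf.get(TableScanDesc.FILTER_TEXT_CONF_STR));
  }
}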


Example 5: pushProjectionsAndFilters

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
protected void pushProjectionsAndFilters(JobConf jobConf, Class inputFormatClass,
    String splitPath, String splitPathWithNoSchema, boolean nonNative) {
  if (this.mrwork == null) {
    init(job);
  }

  if (this.mrwork.getPathToAliases() == null) {
    return;
  }

  ArrayList<String> aliases = new ArrayList<String>();
  Iterator<Entry<String, ArrayList<String>>> iterator = this.mrwork
      .getPathToAliases().entrySet().iterator();

  while (iterator.hasNext()) {
    Entry<String, ArrayList<String>> entry = iterator.next();
    String key = entry.getKey();
    boolean match;
    if (nonNative) {
      // For non-native tables, we need to do an exact match to avoid
      // HIVE-1903.  (The table location contains no files, and the string
      // representation of its path does not have a trailing slash.)
      match =
        splitPath.equals(key) || splitPathWithNoSchema.equals(key);
    } else {
      // But for native tables, we need to do a prefix match for
      // subdirectories.  (Unlike non-native tables, prefix mixups don't seem
      // to be a potential problem here since we are always dealing with the
      // path to something deeper than the table location.)
      match =
        splitPath.startsWith(key) || splitPathWithNoSchema.startsWith(key);
    }
    if (match) {
      ArrayList<String> list = entry.getValue();
      for (String val : list) {
        aliases.add(val);
      }
    }
  }

  for (String alias : aliases) {
    Operator<? extends OperatorDesc> op = this.mrwork.getAliasToWork().get(
      alias);
    if (op instanceof TableScanOperator) {
      TableScanOperator ts = (TableScanOperator) op;
      // push down projections.
      ColumnProjectionUtils.appendReadColumns(
          jobConf, ts.getNeededColumnIDs(), ts.getNeededColumns());
      // push down filters
      pushFilters(jobConf, ts);
    }
  }
}
 
Developer ID: mini666, Project: hive-phoenix-handler, Lines of code: 54, Source file: HiveInputFormat.java
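The projection half of this push-down is symmetric to the filter half: appendReadColumns records the needed column IDs in the JobConf, and a columnar reader (ORC, RCFile, Parquet) retrieves them when opening the split. A minimal round-trip sketch, assuming the List&lt;Integer&gt; overload of appendReadColumns available in recent Hive versions:

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;
import org.apache.hadoop.mapred.JobConf;

public class ProjectionRoundTripDemo {
  public static void main(String[] args) {
    JobConf jobConf = new JobConf();
    // Stand-in for ts.getNeededColumnIDs() from Example 5.
    ColumnProjectionUtils.appendReadColumns(jobConf, Arrays.asList(0, 2));

    // What a columnar reader sees when it opens the split:
    List<Integer> ids = ColumnProjectionUtils.getReadColumnIDs(jobConf);
    System.out.println("projected column ids: " + ids);
  }
}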


Example 6: setReadColumns

import org.apache.hadoop.hive.ql.exec.TableScanOperator; // import the required package/class
/**
 * Set read columns.
 * For a join query (select t1.c1, t1.c2, ... from t1, t2 where t1.c1 = t2.c1),
 * the InputFormat cannot determine the target columns on its own because the
 * columns of table t1 are never set, so the read columns are forced here.
 *
 * @author JeongMin Ju
 * @param jobConf
 * @param tableScan
 * @create-date 2016-01-15
 */
private static void setReadColumns(JobConf jobConf, TableScanOperator tableScan) {
  List<String> readColumnList = tableScan.getNeededColumns();
  if (readColumnList != null) {
    jobConf.set(ColumnProjectionUtils.READ_COLUMN_NAMES_CONF_STR,
        Joiner.on(",").join(readColumnList));
  }
}
 
Developer ID: mini666, Project: hive-phoenix-handler, Lines of code: 17, Source file: HiveInputFormat.java
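And the read-back side of Example 6: an InputFormat can recover the forced column list from the same configuration key. A minimal sketch, with column names c1 and c2 as assumed placeholders:

import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;
import org.apache.hadoop.mapred.JobConf;

public class ReadColumnNamesDemo {
  public static void main(String[] args) {
    JobConf jobConf = new JobConf();
    jobConf.set(ColumnProjectionUtils.READ_COLUMN_NAMES_CONF_STR, "c1,c2");

    // Even for a join query where per-table columns were never set, the
    // InputFormat can now resolve the scan's target columns.
    for (String name : jobConf.get(ColumnProjectionUtils.READ_COLUMN_NAMES_CONF_STR, "").split(",")) {
      System.out.println("read column: " + name);
    }
  }
}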



Note: The org.apache.hadoop.hive.ql.exec.TableScanOperator examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs, and the snippets were selected from open-source projects contributed by their respective authors. Copyright of the source code remains with the original authors; distribution and use must follow the corresponding project's license. Please do not republish without permission.

