Java ShortCircuitShmResponseProto Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto. If you are wondering what ShortCircuitShmResponseProto is for, or how to use it, the curated class examples below may help.



The ShortCircuitShmResponseProto class belongs to the org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos package. Six code examples of the class are shown below, ordered by popularity.

Example 1: sendShmSuccessResponse

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
private void sendShmSuccessResponse(DomainSocket sock, NewShmInfo shmInfo)
    throws IOException {
  DataNodeFaultInjector.get().sendShortCircuitShmResponse();
  ShortCircuitShmResponseProto.newBuilder().setStatus(SUCCESS).
      setId(PBHelper.convert(shmInfo.shmId)).build().
      writeDelimitedTo(socketOut);
  // Send the file descriptor for the shared memory segment.
  byte buf[] = new byte[] { (byte)0 };
  FileDescriptor shmFdArray[] =
      new FileDescriptor[] { shmInfo.stream.getFD() };
  sock.sendFileDescriptors(shmFdArray, buf, 0, buf.length);
}
 
Developer: naver | Project: hadoop | Lines: 13 | Source: DataXceiver.java


Example 2: sendShmSuccessResponse

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
private void sendShmSuccessResponse(DomainSocket sock, NewShmInfo shmInfo)
    throws IOException {
  DataNodeFaultInjector.get().sendShortCircuitShmResponse();
  ShortCircuitShmResponseProto.newBuilder().setStatus(SUCCESS).
      setId(PBHelperClient.convert(shmInfo.shmId)).build().
      writeDelimitedTo(socketOut);
  // Send the file descriptor for the shared memory segment.
  byte buf[] = new byte[] { (byte)0 };
  FileDescriptor shmFdArray[] =
      new FileDescriptor[] { shmInfo.stream.getFD() };
  sock.sendFileDescriptors(shmFdArray, buf, 0, buf.length);
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 13 | Source: DataXceiver.java


Example 3: sendShmSuccessResponse

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
private void sendShmSuccessResponse(DomainSocket sock, NewShmInfo shmInfo)
    throws IOException {
  ShortCircuitShmResponseProto.newBuilder().setStatus(SUCCESS).
      setId(PBHelper.convert(shmInfo.shmId)).build().
      writeDelimitedTo(socketOut);
  // Send the file descriptor for the shared memory segment.
  byte buf[] = new byte[] { (byte)0 };
  FileDescriptor shmFdArray[] =
      new FileDescriptor[] { shmInfo.stream.getFD() };
  sock.sendFileDescriptors(shmFdArray, buf, 0, buf.length);
}
 
Developer: yncxcw | Project: FlexMap | Lines: 12 | Source: DataXceiver.java


Example 4: requestNewShm

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
/**
 * Ask the DataNode for a new shared memory segment.  This function must be
 * called with the manager lock held.  We will release the lock while
 * communicating with the DataNode.
 *
 * @param clientName    The current client name.
 * @param peer          The peer to use to talk to the DataNode.
 *
 * @return              Null if the DataNode does not support shared memory
 *                        segments, or experienced an error creating the
 *                        shm.  The shared memory segment itself on success.
 * @throws IOException  If there was an error communicating over the socket.
 *                        We will not throw an IOException unless the socket
 *                        itself (or the network) is the problem.
 */
private DfsClientShm requestNewShm(String clientName, DomainPeer peer)
    throws IOException {
  final DataOutputStream out = 
      new DataOutputStream(
          new BufferedOutputStream(peer.getOutputStream()));
  new Sender(out).requestShortCircuitShm(clientName);
  ShortCircuitShmResponseProto resp = 
      ShortCircuitShmResponseProto.parseFrom(
          PBHelper.vintPrefixed(peer.getInputStream()));
  String error = resp.hasError() ? resp.getError() : "(unknown)";
  switch (resp.getStatus()) {
  case SUCCESS:
    DomainSocket sock = peer.getDomainSocket();
    byte buf[] = new byte[1];
    FileInputStream fis[] = new FileInputStream[1];
    if (sock.recvFileInputStreams(fis, buf, 0, buf.length) < 0) {
      throw new EOFException("got EOF while trying to transfer the " +
          "file descriptor for the shared memory segment.");
    }
    if (fis[0] == null) {
      throw new IOException("the datanode " + datanode + " failed to " +
          "pass a file descriptor for the shared memory segment.");
    }
    try {
      DfsClientShm shm = 
          new DfsClientShm(PBHelper.convert(resp.getId()),
              fis[0], this, peer);
      if (LOG.isTraceEnabled()) {
        LOG.trace(this + ": createNewShm: created " + shm);
      }
      return shm;
    } finally {
      IOUtils.cleanup(LOG,  fis[0]);
    }
  case ERROR_UNSUPPORTED:
    // The DataNode just does not support short-circuit shared memory
    // access, and we should stop asking.
    LOG.info(this + ": datanode does not support short-circuit " +
        "shared memory access: " + error);
    disabled = true;
    return null;
  default:
    // The datanode experienced some kind of unexpected error when trying to
    // create the short-circuit shared memory segment.
    LOG.warn(this + ": error requesting short-circuit shared memory " +
        "access: " + error);
    return null;
  }
}
 
Developer: naver | Project: hadoop | Lines: 65 | Source: DfsClientShmManager.java


Example 5: sendShmErrorResponse

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
private void sendShmErrorResponse(Status status, String error)
    throws IOException {
  ShortCircuitShmResponseProto.newBuilder().setStatus(status).
      setError(error).build().writeDelimitedTo(socketOut);
}
 
Developer: naver | Project: hadoop | Lines: 6 | Source: DataXceiver.java


Example 6: requestNewShm

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto; // import the required package/class
/**
 * Ask the DataNode for a new shared memory segment.  This function must be
 * called with the manager lock held.  We will release the lock while
 * communicating with the DataNode.
 *
 * @param clientName    The current client name.
 * @param peer          The peer to use to talk to the DataNode.
 *
 * @return              Null if the DataNode does not support shared memory
 *                        segments, or experienced an error creating the
 *                        shm.  The shared memory segment itself on success.
 * @throws IOException  If there was an error communicating over the socket.
 *                        We will not throw an IOException unless the socket
 *                        itself (or the network) is the problem.
 */
private DfsClientShm requestNewShm(String clientName, DomainPeer peer)
    throws IOException {
  final DataOutputStream out =
      new DataOutputStream(
          new BufferedOutputStream(peer.getOutputStream()));
  new Sender(out).requestShortCircuitShm(clientName);
  ShortCircuitShmResponseProto resp =
      ShortCircuitShmResponseProto.parseFrom(
        PBHelperClient.vintPrefixed(peer.getInputStream()));
  String error = resp.hasError() ? resp.getError() : "(unknown)";
  switch (resp.getStatus()) {
  case SUCCESS:
    DomainSocket sock = peer.getDomainSocket();
    byte buf[] = new byte[1];
    FileInputStream fis[] = new FileInputStream[1];
    if (sock.recvFileInputStreams(fis, buf, 0, buf.length) < 0) {
      throw new EOFException("got EOF while trying to transfer the " +
          "file descriptor for the shared memory segment.");
    }
    if (fis[0] == null) {
      throw new IOException("the datanode " + datanode + " failed to " +
          "pass a file descriptor for the shared memory segment.");
    }
    try {
      DfsClientShm shm =
          new DfsClientShm(PBHelperClient.convert(resp.getId()),
              fis[0], this, peer);
      LOG.trace("{}: createNewShm: created {}", this, shm);
      return shm;
    } finally {
      try {
        fis[0].close();
      } catch (Throwable e) {
        LOG.debug("Exception in closing " + fis[0], e);
      }
    }
  case ERROR_UNSUPPORTED:
    // The DataNode just does not support short-circuit shared memory
    // access, and we should stop asking.
    LOG.info(this + ": datanode does not support short-circuit " +
        "shared memory access: " + error);
    disabled = true;
    return null;
  default:
    // The datanode experienced some kind of unexpected error when trying to
    // create the short-circuit shared memory segment.
    LOG.warn(this + ": error requesting short-circuit shared memory " +
        "access: " + error);
    return null;
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 67 | Source: DfsClientShmManager.java
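Every example above frames the protobuf message with a base-128 varint length prefix: `writeDelimitedTo` writes the prefix plus the message body on the sending side, and `PBHelper.vintPrefixed` (or `PBHelperClient.vintPrefixed`) consumes the prefix before `parseFrom` on the receiving side. Below is a minimal sketch of that wire framing using only the JDK, with no Hadoop or protobuf dependency; the class and method names are our own for illustration, and the single `read` of the payload is adequate only for in-memory streams.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch of protobuf's varint-delimited framing, as used by
// writeDelimitedTo on the DataNode side and vintPrefixed on the client side.
public class VarintFraming {

  // Write the payload length as a base-128 varint, then the payload bytes.
  static void writeDelimited(OutputStream out, byte[] payload) throws IOException {
    int v = payload.length;
    while ((v & ~0x7F) != 0) {
      out.write((v & 0x7F) | 0x80); // low 7 bits, continuation bit set
      v >>>= 7;
    }
    out.write(v); // final byte, continuation bit clear
    out.write(payload);
  }

  // Read the varint prefix, then exactly that many payload bytes.
  static byte[] readDelimited(InputStream in) throws IOException {
    int len = 0, shift = 0, b;
    do {
      b = in.read();
      if (b < 0) throw new IOException("EOF while reading varint prefix");
      len |= (b & 0x7F) << shift;
      shift += 7;
    } while ((b & 0x80) != 0);
    byte[] payload = new byte[len];
    // A single read suffices for in-memory streams; real sockets need a loop.
    if (in.read(payload) != len) throw new IOException("truncated payload");
    return payload;
  }

  public static void main(String[] args) throws IOException {
    byte[] msg = new byte[300]; // 300 > 127, so the prefix takes two varint bytes
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    writeDelimited(bos, msg);
    byte[] framed = bos.toByteArray();
    System.out.println(framed.length); // 302: 2-byte prefix + 300-byte payload
    byte[] back = readDelimited(new ByteArrayInputStream(framed));
    System.out.println(back.length);   // 300
  }
}
```

This prefix is why the client cannot simply call `ShortCircuitShmResponseProto.parseFrom(stream)` on the raw socket: the helper must strip the varint length first, which is exactly what `vintPrefixed` does in Examples 4 and 6.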



Note: the org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmResponseProto examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright in each snippet remains with its original authors; consult the corresponding project's license before redistributing or reusing the code.

