Downloading files and folders from HDFS in Java, output as a stream so users can save to any path they choose

Tags: java, hdfs

Contents

  • Downloading HDFS files and folders in Java
    • Overview: downloading files and folders from HDFS in Java, streamed so users can save them to any path
  • 1. Downloading a file
  • 2. Downloading a folder

javahdfs_2">java实现下载hdfs文件及文件夹

javaHDFS_6">说明:java实现从HDFS上下载文件及文件夹的功能,以流形式输出,便于用户自定义保存任何路径下

java"> <!--阿里 FastJson依赖-->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.1.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.1</version>
        </dependency>

The jars above supply the classes used below; cross-check each snippet's imports against this dependency list.

1. Downloading a file

Execution flow for "download a file":
			1. Build the HDFS connection and initialize a Configuration.
			2. Open an FSDataInputStream on the target file and call downloadFile().
			3. Inside downloadFile(), set the response headers so the content downloads under the name produced by convertFileName(fileName), then write the stream's contents out as the response body.
java">import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import util.ExportUtil;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
/**
     * Download a file.
     * @author liudz
     * @date 2020/6/9
     * @return response entity streaming the file contents
     **/
    @RequestMapping(value = "/down", method = RequestMethod.GET)
    public ResponseEntity<InputStreamResource> test01() throws URISyntaxException, IOException {
        // The next two lines initialize the HDFS connection
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("hdfs://172.16.1.9:8020"), conf);
        FSDataInputStream inputStream = fs.open(new Path("hdfs://172.16.1.9:8020/spark/testLog.txt"));
        ResponseEntity<InputStreamResource> result = ExportUtil.downloadFile(inputStream, "testLog.txt");
        return result;
    }
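Note: the FileSystem and FSDataInputStream are deliberately left open here. InputStreamResource is consumed lazily when Spring serializes the response, so closing the stream inside the controller would truncate the download.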
java">import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

import lombok.extern.slf4j.Slf4j;

import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
/**
     * Stream a file's contents out as the HTTP response.
     *
     * @param in input stream over the file contents
     * @param fileName file name
     * @return response entity streaming the file, or null on error
     */
    public static ResponseEntity<InputStreamResource> downloadFile(InputStream in, String fileName) {

        try {
            // available() is only an estimate of what can be read without blocking,
            // so the Content-Length derived from it may undercount a remote HDFS stream
            byte[] testBytes = new byte[in.available()];
            HttpHeaders headers = new HttpHeaders();
            headers.add("Cache-Control", "no-cache, no-store, must-revalidate");
            headers.add("Content-Disposition", String.format("attachment; filename=\"%s\"", convertFileName(fileName)));
            headers.add("Pragma", "no-cache");
            headers.add("Expires", "0");
            headers.add("Content-Language", "UTF-8");
            // This final statement is what streams the file contents to the client
            return ResponseEntity.ok().headers(headers).contentLength(testBytes.length)
                .contentType(MediaType.parseMediaType("application/octet-stream")).body(new InputStreamResource(in));
        } catch (IOException e) {
            log.info("downloadFile error: {}", e.getMessage());
        }
        log.info("file is null: {}", fileName);
        return null;
    }
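convertFileName() is referenced above but its body is not shown in the original. A minimal sketch of what it might look like, assuming its job is simply to URL-encode the name (the URLEncoder import above suggests this) so that non-ASCII file names survive the Content-Disposition header:

    /**
     * Hypothetical helper (not from the original post): encode the file name
     * for use in the Content-Disposition header.
     */
    private static String convertFileName(String fileName) {
        try {
            // URL-encode, then restore spaces that encode() turns into '+'
            return URLEncoder.encode(fileName, "UTF-8").replaceAll("\\+", "%20");
        } catch (UnsupportedEncodingException e) {
            // fall back to the raw name if UTF-8 is somehow unavailable
            return fileName;
        }
    }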

2. Downloading a folder

Execution flow for "download a folder and its files":
	1. Initialize the response headers so the folder downloads as xx.zip, then call down2().
	2. Build the HDFS connection and initialize a Configuration.
	3. Call the recursive compress() method with (the folder's full path + a ZipOutputStream instance + the FileSystem instance).
	4. How the recursion works, for each child of the current directory:
			1) if it is a directory, write a zip entry opening that folder (last path segment + "/");
			2) if it is a file, write a zip entry for the file (its path relative to the root folder).
----------------------------------------------------------------------------------------
******Note: the two lines that are easy to get wrong:******
File entry:   zipOutputStream.putNextEntry(new ZipEntry(name.substring(1)));
Folder entry: zipOutputStream.putNextEntry(new ZipEntry(fileStatulist[i].getPath().getName() + "/"));
**name is used to create file entries in the zip; fileStatulist[i].getPath().getName() is used to create folder entries**
-----------------------------------------------------------------------------------------
Worked example:
	Suppose the folder spark-warehouse contains two subfolders, data1 and data2, each holding one text file a.txt.
	Step 1: list the directory "C:/Users/liudz/Desktop/spark-warehouse", which yields (C:/Users/liudz/Desktop/spark-warehouse/data1 and C:/Users/liudz/Desktop/spark-warehouse/data2)
	lastName=spark-warehouse
	name=/spark-warehouse/data1
	"C:/Users/liudz/Desktop/spark-warehouse/data1" is a directory, so the zip gets a "data1/" folder entry
	Step 2: list the directory "C:/Users/liudz/Desktop/spark-warehouse/data1", which yields (C:/Users/liudz/Desktop/spark-warehouse/data1/a.txt)
	lastName=data1
	name=/data1/a.txt
	"C:/Users/liudz/Desktop/spark-warehouse/data1/a.txt" is a file, so the zip gets a "data1/a.txt" file entry
	...
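Following the same pattern for data2, the finished archive contains the entries data1/, data1/a.txt, data2/ and data2/a.txt.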
java">import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import util.ExportUtil;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
/**
     * Download a folder.
     * @param businessId business ID
     * @author liudz
     * @date 2020/6/9
     * @return response entity carrying the zipped folder
     **/
    @RequestMapping(value = "/downloadFolder", method = RequestMethod.GET)
    public ResponseEntity<byte[]> downloadFolder(Long businessId) throws IOException {
        ResponseEntity<byte[]> response = null;
        HttpHeaders headers = new HttpHeaders();
        headers.add("Cache-Control", "no-cache, no-store, must-revalidate");
        headers.add("Content-Disposition", "attachment; filename=spark-warehouse.zip");
        headers.add("Pragma", "no-cache");
        headers.add("Expires", "0");
        headers.add("Content-Language", "UTF-8");
        ByteArrayOutputStream zos =
                (ByteArrayOutputStream) hdfsClientService.down2("hdfs://172.16.1.9:8020/spark/spark-warehouse");
        byte[] out = zos.toByteArray();
        zos.close();
        response = new ResponseEntity<>(out, headers, HttpStatus.OK);

        return response;
    }
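Note that down2() builds the entire archive in a ByteArrayOutputStream, so the zipped folder must fit in heap memory. A minimal sketch of an alternative, assuming Spring MVC's StreamingResponseBody (org.springframework.web.servlet.mvc.method.annotation) is available, that zips straight into the response instead of buffering; the endpoint name is illustrative:

    // Hypothetical streaming variant; also requires imports for
    // java.util.zip.ZipOutputStream, java.net.URI and org.springframework.http.MediaType
    @RequestMapping(value = "/downloadFolderStreamed", method = RequestMethod.GET)
    public ResponseEntity<StreamingResponseBody> downloadFolderStreamed() {
        HttpHeaders headers = new HttpHeaders();
        headers.add("Content-Disposition", "attachment; filename=spark-warehouse.zip");
        StreamingResponseBody body = outputStream -> {
            try {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(new URI("hdfs://172.16.1.9:8020"), conf);
                // Zip entries are written directly to the servlet output stream,
                // so the archive is never held in memory as a whole
                ZipOutputStream zos = new ZipOutputStream(outputStream);
                hdfsClientService.compress("hdfs://172.16.1.9:8020/spark/spark-warehouse", zos, fs);
                zos.close();
            } catch (URISyntaxException e) {
                throw new IOException(e);
            }
        };
        return ResponseEntity.ok().headers(headers)
                .contentType(MediaType.parseMediaType("application/octet-stream")).body(body);
    }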
java">import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import lombok.extern.slf4j.Slf4j;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.springframework.stereotype.Service;
/**
     * Zip a folder (possibly containing many files).
     *
     * @param cloudPath
     *            folder path on HDFS
     * @author liudz
     * @date 2020/6/8
     * @return output stream holding the zipped bytes
     **/
    public OutputStream down2(String cloudPath) {
        // 1. Get the FileSystem handle
        ByteArrayOutputStream out = null;
        try {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(new URI("hdfs://172.16.1.9:8020"), conf);
            out = new ByteArrayOutputStream();
            ZipOutputStream zos = new ZipOutputStream(out);
            compress(cloudPath, zos, fs);
            zos.close();
        } catch (IOException e) {
            log.info("----error: {}----", e.getMessage());
        } catch (URISyntaxException e) {
            log.info("----error: {}----", e.getMessage());
        }
        return out;
    }
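Since down2() returns the stream rather than writing to a fixed location, the caller decides where the archive ends up. A brief usage sketch, with an illustrative local target path (requires java.io.FileOutputStream):

    // Hypothetical usage: persist the zipped folder to a caller-chosen path
    ByteArrayOutputStream zipped =
            (ByteArrayOutputStream) down2("hdfs://172.16.1.9:8020/spark/spark-warehouse");
    try (OutputStream fileOut = new FileOutputStream("C:/Users/liudz/Desktop/spark-warehouse.zip")) {
        zipped.writeTo(fileOut);  // copy the in-memory zip bytes to disk
    }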
java">import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import lombok.extern.slf4j.Slf4j;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.springframework.stereotype.Service;
/**
     * Recursively add the contents of baseDir to the zip stream.
     *
     * @param baseDir
     *            directory to compress
     * @param zipOutputStream
     *            target zip stream
     * @param fs
     *            HDFS file system handle
     * @author liudz
     * @date 2020/6/8
     **/
    public void compress(String baseDir, ZipOutputStream zipOutputStream, FileSystem fs) throws IOException {

        try {
            FileStatus[] fileStatulist = fs.listStatus(new Path(baseDir));
            log.info("basedir = " + baseDir);
            String[] strs = baseDir.split("/");
            // lastName is the final segment of the path
            String lastName = strs[strs.length - 1];

            for (int i = 0; i < fileStatulist.length; i++) {

                String name = fileStatulist[i].getPath().toString();
                name = name.substring(name.indexOf("/" + lastName));

                if (fileStatulist[i].isFile()) {
                    Path path = fileStatulist[i].getPath();
                    FSDataInputStream inputStream = fs.open(path);
                    // file entry: path relative to the root folder being zipped
                    zipOutputStream.putNextEntry(new ZipEntry(name.substring(1)));
                    // copy in 1024-byte chunks; putNextEntry implicitly closes the previous entry
                    IOUtils.copyBytes(inputStream, zipOutputStream, 1024);
                    inputStream.close();
                } else {
                    // folder entry: last path segment plus a trailing "/"
                    zipOutputStream.putNextEntry(new ZipEntry(fileStatulist[i].getPath().getName() + "/"));
                    log.info("fileStatulist[i].getPath().toString() = " + fileStatulist[i].getPath().toString());
                    compress(fileStatulist[i].getPath().toString(), zipOutputStream, fs);
                }
            }
        } catch (IOException e) {
            log.info("----error: {}----", e.getMessage());
        }
    }
