First, understand the basics
The shell command line
●.Start Hadoop
cd /usr/local/hadoop
./sbin/start-dfs.sh
⑴.Upload the local file house.txt to the mydir directory on HDFS
./bin/hdfs dfs -put ./house.txt mydir
⑵.Download house.txt from the mydir directory on HDFS to the local filesystem
./bin/hdfs dfs -get mydir/house.txt file:///usr/local/hadoop
⑶.Print the contents of a given HDFS file (loli.txt) to the terminal
./bin/hdfs dfs -cat loli.txt
⑷.Show the permissions, size, modification time, path, and other details of a given HDFS file (loli.txt)
./bin/hdfs dfs -ls -h loli.txt
⑸.Create a directory (dir1/dir2)
./bin/hdfs dfs -mkdir -p dir1/dir2
⑹.Create a file under that directory (dir1/dir2/myfile)
./bin/hdfs dfs -touchz dir1/dir2/myfile
⑺.Show the permissions, size, modification time, path, and other details of every file under a given HDFS directory (dir1/dir2)
./bin/hdfs dfs -ls -R -h dir1/dir2
⑻.Delete a file from HDFS (dir1/dir2/myfile)
./bin/hdfs dfs -rm dir1/dir2/myfile
⑼.Delete an empty directory from HDFS (dir1/dir2); note that plain -rm fails on a directory, so -rmdir is needed here
./bin/hdfs dfs -rmdir dir1/dir2
⑽.Force-delete an HDFS directory (dir1/dir2) recursively, even if it is not empty
./bin/hdfs dfs -rm -r dir1/dir2
⑾.Move (or rename) a file within HDFS (loli1.txt -> loli2.txt)
./bin/hdfs dfs -mv loli1.txt loli2.txt
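The HDFS commands above mirror their Linux counterparts almost one-to-one, which is worth seeing side by side. A minimal local sketch of steps ⑸ through ⑽ using plain coreutils against a throwaway scratch directory (the `base` name is just for this demo):

```shell
# Local-filesystem analogue of the HDFS steps above; each line's comment
# names the corresponding hdfs dfs command.
base=$(mktemp -d)                  # throwaway scratch directory
mkdir -p "$base/dir1/dir2"         # ~ hdfs dfs -mkdir -p
touch "$base/dir1/dir2/myfile"     # ~ hdfs dfs -touchz
ls -lhR "$base/dir1"               # ~ hdfs dfs -ls -R -h
rm "$base/dir1/dir2/myfile"        # ~ hdfs dfs -rm
rmdir "$base/dir1/dir2"            # ~ hdfs dfs -rmdir (empty dirs only)
rm -r "$base"                      # ~ hdfs dfs -rm -r
```

The main practical difference is the path namespace: the HDFS commands operate on the distributed filesystem rooted at the user's HDFS home directory, not on local disk.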
More shell command line
❶.Upload the local file house.txt to HDFS
(stored as loli.txt on HDFS; if loli.txt already exists there, append to it, otherwise upload it normally)
if ./bin/hdfs dfs -test -e loli.txt; then
    ./bin/hdfs dfs -appendToFile house.txt loli.txt
else
    ./bin/hdfs dfs -copyFromLocal -f house.txt loli.txt
fi
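This works because `hdfs dfs -test -e` prints nothing and signals its result through the exit status (0 if the path exists), so it can sit directly in the `if` condition; wrapping it in `$( … )` is unnecessary. A minimal local sketch of the same branch structure, with the ordinary `test -e` standing in for the HDFS call (the `scratch` and `action` names are just for illustration):

```shell
# Exists-then-branch pattern: branch on a command's exit status.
scratch=$(mktemp)                   # the file exists, so the first branch runs
if test -e "$scratch"; then
    action="append"                 # would run: hdfs dfs -appendToFile ...
else
    action="upload"                 # would run: hdfs dfs -copyFromLocal ...
fi
rm -f "$scratch"
echo "$action"
```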
❷.Download loli.txt from HDFS to the local filesystem
(saved locally as house.txt; if house.txt already exists locally, save the download as house2.txt instead)
if ./bin/hdfs dfs -test -e file:///usr/local/hadoop/house.txt; then
    ./bin/hdfs dfs -copyToLocal loli.txt ./house2.txt
else
    ./bin/hdfs dfs -copyToLocal loli.txt ./house.txt
fi
❸.Given an HDFS path (dir1/dir2): if the path exists, create a file (filename) in it; if not, create the path first and then create the file (filename)
if ./bin/hdfs dfs -test -d dir1/dir2; then
    ./bin/hdfs dfs -touchz dir1/dir2/filename
else
    ./bin/hdfs dfs -mkdir -p dir1/dir2 && ./bin/hdfs dfs -touchz dir1/dir2/filename
fi
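The same create-if-missing logic can be tried out locally without a cluster. A sketch using a nested path under a temporary directory to mirror dir1/dir2 on HDFS (the `root`, `target`, and `ok` names are just for this demo):

```shell
# Create the directory only when it is missing, then create the file in it.
root=$(mktemp -d)
target="$root/dir1/dir2"            # does not exist yet, so the else branch runs
if test -d "$target"; then
    touch "$target/filename"
else
    mkdir -p "$target" && touch "$target/filename"
fi
ok=$(test -e "$target/filename" && echo yes)
rm -r "$root"                       # clean up the scratch tree
echo "$ok"
```

With `-mkdir -p` already creating missing parents and ignoring existing ones, the HDFS version could even drop the `if` entirely and always run `mkdir -p` before `touchz`; the branch is kept here to match the exercise as stated.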
Summary
If you are already comfortable with Linux commands, all of this is very straightforward.