Connecting to Hadoop from Java
Experiment environment
- Hadoop version: 3.3.2
- JDK version: 1.8
- OS hosting Hadoop: Ubuntu 18.04
- IDE: IDEA
- Development host: Windows
Experiment content
Test a remote Java connection to Hadoop.
Create a Maven project and add the following dependencies:
<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>RELEASE</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>
The /etc/hosts configuration on the virtual machine:

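As a minimal sketch of the required entry, the file maps the VM's IP address to its hostname; the 10.0.12.11 address below is a hypothetical placeholder, so substitute the VM's real IP:
# /etc/hosts (the IP shown is a hypothetical example)
10.0.12.11 VM-12-11-ubuntu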
hdfs-site.xml configuration
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>VM-12-11-ubuntu:50010</value>
    </property>
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp/dfs/data</value>
    </property>
</configuration>
core-site.xml configuration
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://VM-12-11-ubuntu:9000</value>
    </property>
</configuration>
Start Hadoop:
sbin/start-dfs.sh
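As a quick sanity check (an addition here, assuming a standard single-node HDFS setup), running jps on the VM should list the HDFS daemons once the script finishes:
jps
# the output should include NameNode, DataNode and SecondaryNameNode,
# each preceded by its process id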
The hosts file configuration on the development host (C:\Windows\System32\drivers\etc):

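The same hostname mapping is needed here so the Windows client can resolve the datanode's hostname, again with a hypothetical placeholder IP:
10.0.12.11 VM-12-11-ubuntu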
Try connecting to the virtual machine's Hadoop and reading a file; here I read the contents of /root/iinput on HDFS.

Java code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestConnectHadoop {
    public static void main(String[] args) throws Exception {
        String hostname = "VM-12-11-ubuntu";
        String HDFS_PATH = "hdfs://" + hostname + ":9000";
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", HDFS_PATH);
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        // resolve datanodes by hostname, so the hosts mapping above is used
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);
        // list everything under the HDFS root
        FileStatus[] fileStatuses = fs.listStatus(new Path("/"));
        for (FileStatus fileStatus : fileStatuses) {
            System.out.println(fileStatus.toString());
        }
        FileStatus fileStatus = fs.getFileStatus(new Path("/root/iinput"));
        System.out.println(fileStatus.getOwner());
        System.out.println(fileStatus.getGroup());
        System.out.println(fileStatus.getPath());
        // read the whole file into a string, 1 KB at a time
        FSDataInputStream open = fs.open(fileStatus.getPath());
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = open.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        System.out.println(sb);
    }
}
Run result:

Implement a class "MyFSDataInputStream" that extends "org.apache.hadoop.fs.FSDataInputStream", with the following requirements: ① implement a method "readLine()" that reads the specified HDFS file line by line; if the end of the file has been reached it returns null, otherwise it returns one line of the file's text.
Approach: hmm, my approach is fairly simple, only fits this particular requirement, and is for reference only.
Read all of the data out and store it, then split it on the newline character and keep the resulting string array for readLine() to return from.
Java code:
import org.apache.hadoop.fs.FSDataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MyFSDataInputStream extends FSDataInputStream {
    private String data = null;
    private String[] lines = null;
    private int count = 0;
    private FSDataInputStream in;

    public MyFSDataInputStream(InputStream in) throws IOException {
        super(in);
        this.in = (FSDataInputStream) in;
        init();
    }

    // read the whole stream once and split it into lines
    private void init() throws IOException {
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = this.in.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        data = sb.toString();
        lines = data.split("\n");
    }

    /**
     * Reads the specified HDFS file line by line: returns null at the end
     * of the file, otherwise returns one line of text.
     */
    public String read_line() {
        return count < lines.length ? lines[count++] : null;
    }
}
Test class:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestConnectHadoop {
    public static void main(String[] args) throws Exception {
        String hostname = "VM-12-11-ubuntu";
        String HDFS_PATH = "hdfs://" + hostname + ":9000";
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", HDFS_PATH);
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);
        FileStatus fileStatus = fs.getFileStatus(new Path("/root/iinput"));
        System.out.println(fileStatus.getOwner());
        System.out.println(fileStatus.getGroup());
        System.out.println(fileStatus.getPath());
        FSDataInputStream open = fs.open(fileStatus.getPath());
        MyFSDataInputStream myFSDataInputStream = new MyFSDataInputStream(open);
        // print the file line by line through the custom stream
        String line = null;
        int count = 0;
        while ((line = myFSDataInputStream.read_line()) != null) {
            System.out.printf("line %d is: %s\n", count++, line);
        }
        System.out.println("end");
    }
}
Run result:

② Implement caching: when "MyFSDataInputStream" is used to read some number of bytes, first consult the cache; if the cache already holds the needed data, serve it directly from the cache, otherwise read it from HDFS.
import org.apache.hadoop.fs.FSDataInputStream;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MyFSDataInputStream extends FSDataInputStream {
    private BufferedInputStream buffer;
    private String[] lines = null;
    private int count = 0;
    private FSDataInputStream in;

    public MyFSDataInputStream(InputStream in) throws IOException {
        super(in);
        this.in = (FSDataInputStream) in;
        init();
    }

    private void init() throws IOException {
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = this.in.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        // rewind to the start, then serve byte reads through a cache:
        // BufferedInputStream pulls data from HDFS in chunks and answers
        // reads of already-buffered regions from memory
        this.in.seek(0);
        buffer = new BufferedInputStream(this.in);
        lines = sb.toString().split("\n");
    }

    /**
     * Reads the specified HDFS file line by line: returns null at the end
     * of the file, otherwise returns one line of text.
     */
    public String read_line() {
        return count < lines.length ? lines[count++] : null;
    }

    @Override
    public int read() throws IOException {
        return this.buffer.read();
    }

    public int readWithBuf(byte[] buf, int offset, int len) throws IOException {
        return this.buffer.read(buf, offset, len);
    }

    public int readWithBuf(byte[] buf) throws IOException {
        return this.buffer.read(buf);
    }
}
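A test for the cached version is not shown above; here is a minimal usage sketch, assuming the same connection settings and /root/iinput file as earlier (the class name TestBufferedRead is a made-up placeholder):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestBufferedRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://VM-12-11-ubuntu:9000");
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);
        MyFSDataInputStream in = new MyFSDataInputStream(fs.open(new Path("/root/iinput")));
        // the first read fills the BufferedInputStream's internal cache;
        // reads that fall inside already-buffered data are served from memory
        byte[] buf = new byte[16];
        int n = in.readWithBuf(buf);
        if (n > 0) {
            System.out.println(n + " bytes: " + new String(buf, 0, n));
        }
        in.close();
    }
}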