Understanding TF-IDF, with a Java Implementation Example
Preface
A while ago I went back over the TF-IDF notes I had put together earlier, and I'm publishing them here on the blog. Knowledge needs to be revisited again and again, or it grows unfamiliar.
Understanding TF-IDF
TF-IDF (term frequency–inverse document frequency) is a weighting technique commonly used in information retrieval and text mining. Its main idea: if a word or phrase appears with high frequency (TF) in one article but rarely in other articles, the term is considered to have good category-discriminating power and is well suited for classification. TF-IDF is simply TF * IDF: TF is the term frequency, and IDF is the inverse document frequency. TF measures how frequently a term occurs in a document d. The idea behind IDF: the fewer the documents that contain term t (i.e., the smaller n is), the larger the IDF, which indicates that t discriminates well between categories.

IDF has a known weakness, though. Suppose the documents of some category C include m documents containing term t, and the other categories contribute k documents containing t, so all n = m + k documents contain t. When m is large, n is large too, and the IDF formula yields a small value, implying that t is a poor discriminator. In reality, a term that occurs frequently within the documents of one class is exactly the kind of term that characterizes that class; such terms deserve a higher weight and should be selected as feature words to distinguish that class from the others. This is the shortcoming of IDF.
TF formula:

$$\mathrm{tf}_{i,j} = \frac{n_{i,j}}{\sum_{k} n_{k,j}}$$

Here $n_{i,j}$ is the number of times term $t_i$ occurs in document $d_j$, and the denominator is the total number of occurrences of all terms in document $d_j$.
IDF formula:

$$\mathrm{idf}_i = \log \frac{|D|}{|\{j : t_i \in d_j\}|}$$

|D|: the total number of documents in the corpus
$|\{j : t_i \in d_j\}|$: the number of documents containing term $t_i$ (that is, the number of documents where $n_{i,j} \neq 0$). If a term does not occur anywhere in the corpus, this denominator would be zero, so in practice one generally uses $1 + |\{j : t_i \in d_j\}|$.

Then:

$$\mathrm{tfidf}_{i,j} = \mathrm{tf}_{i,j} \times \mathrm{idf}_i$$
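A quick worked example makes the formulas concrete (the numbers are invented for illustration, not taken from the original post): suppose a document contains 100 words and the word "cow" appears 3 times, so tf = 3 / 100 = 0.03. If the corpus contains 10,000,000 documents and "cow" appears in 1,000 of them, then idf = ln(10,000,000 / 1,000) = ln(10^4) ≈ 9.21, and tf-idf = 0.03 × 9.21 ≈ 0.28. The natural logarithm is used here to match Math.log in the Java code below.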
TF-IDF Implementation (Java)
Word segmentation is done with the external library IKAnalyzer-2012.jar.
The full code is as follows:
package tfidf;

import java.io.*;
import java.util.*;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class ReadFiles {

    private static ArrayList<String> fileList = new ArrayList<String>(); // the list of files

    // get the list of files under a directory, including its sub-directories
    public static List<String> readDirs(String filepath) throws FileNotFoundException, IOException {
        try {
            File file = new File(filepath);
            if (!file.isDirectory()) {
                System.out.println("The input is not a directory:");
                System.out.println("filepath: " + file.getAbsolutePath());
            } else {
                String[] flist = file.list();
                for (int i = 0; i < flist.length; i++) {
                    File newfile = new File(filepath + File.separator + flist[i]);
                    if (!newfile.isDirectory()) {
                        fileList.add(newfile.getAbsolutePath());
                    } else { // if the entry is a directory, recurse into it
                        readDirs(filepath + File.separator + flist[i]);
                    }
                }
            }
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
        return fileList;
    }

    // read a whole file into a string (the sample files are GBK-encoded)
    public static String readFile(String file) throws FileNotFoundException, IOException {
        StringBuffer strSb = new StringBuffer(); // StringBuffer is mutable, unlike String
        // wrap the byte stream into a character stream, decoding as GBK
        InputStreamReader inStrR = new InputStreamReader(new FileInputStream(file), "gbk");
        BufferedReader br = new BufferedReader(inStrR);
        String line = br.readLine();
        while (line != null) {
            strSb.append(line).append("\r\n");
            line = br.readLine();
        }
        br.close();
        return strSb.toString();
    }

    // word segmentation; IKAnalyzer 2012 exposes segmentation through IKSegmenter
    public static ArrayList<String> cutWords(String file) throws IOException {
        ArrayList<String> words = new ArrayList<String>();
        String text = ReadFiles.readFile(file);
        IKSegmenter ik = new IKSegmenter(new StringReader(text), true); // true = smart mode
        Lexeme lex;
        while ((lex = ik.next()) != null) {
            words.add(lex.getLexemeText());
        }
        return words;
    }

    // term frequency in a file: raw count of each word
    public static HashMap<String, Integer> normalTF(ArrayList<String> cutwords) {
        HashMap<String, Integer> resTF = new HashMap<String, Integer>();
        for (String word : cutwords) {
            if (resTF.get(word) == null) {
                resTF.put(word, 1);
            } else {
                resTF.put(word, resTF.get(word) + 1);
            }
            System.out.println(word);
        }
        return resTF;
    }

    // term frequency in a file: relative frequency of each word
    public static HashMap<String, Float> tf(ArrayList<String> cutwords) {
        HashMap<String, Float> resTF = new HashMap<String, Float>();
        int wordLen = cutwords.size();
        HashMap<String, Integer> intTF = ReadFiles.normalTF(cutwords);
        for (Map.Entry<String, Integer> entry : intTF.entrySet()) {
            float freq = entry.getValue() / (float) wordLen;
            resTF.put(entry.getKey(), freq);
            System.out.println(entry.getKey() + " = " + freq);
        }
        return resTF;
    }

    // raw term counts for every file under a directory
    public static HashMap<String, HashMap<String, Integer>> normalTFAllFiles(String dirc) throws IOException {
        HashMap<String, HashMap<String, Integer>> allNormalTF = new HashMap<String, HashMap<String, Integer>>();
        List<String> filelist = ReadFiles.readDirs(dirc);
        for (String file : filelist) {
            ArrayList<String> cutwords = ReadFiles.cutWords(file); // segment one file
            allNormalTF.put(file, ReadFiles.normalTF(cutwords));
        }
        return allNormalTF;
    }

    // term frequencies for every file under a directory
    public static HashMap<String, HashMap<String, Float>> tfAllFiles(String dirc) throws IOException {
        HashMap<String, HashMap<String, Float>> allTF = new HashMap<String, HashMap<String, Float>>();
        List<String> filelist = ReadFiles.readDirs(dirc);
        for (String file : filelist) {
            ArrayList<String> cutwords = ReadFiles.cutWords(file); // segment one file
            allTF.put(file, ReadFiles.tf(cutwords));
        }
        return allTF;
    }

    // inverse document frequency of every word across all files
    public static HashMap<String, Float> idf(HashMap<String, HashMap<String, Float>> all_tf) {
        HashMap<String, Float> resIdf = new HashMap<String, Float>();
        HashMap<String, Integer> dict = new HashMap<String, Integer>(); // document frequency per word
        int docNum = fileList.size();
        for (int i = 0; i < docNum; i++) {
            HashMap<String, Float> temp = all_tf.get(fileList.get(i));
            for (Map.Entry<String, Float> entry : temp.entrySet()) {
                String word = entry.getKey();
                if (dict.get(word) == null) {
                    dict.put(word, 1);
                } else {
                    dict.put(word, dict.get(word) + 1);
                }
            }
        }
        System.out.println("IDF for every word is:");
        for (Map.Entry<String, Integer> entry : dict.entrySet()) {
            float value = (float) Math.log(docNum / (float) entry.getValue());
            resIdf.put(entry.getKey(), value);
            System.out.println(entry.getKey() + " = " + value);
        }
        return resIdf;
    }

    // combine tf and idf into tf-idf for every word in every file
    public static void tf_idf(HashMap<String, HashMap<String, Float>> all_tf, HashMap<String, Float> idfs) {
        HashMap<String, HashMap<String, Float>> resTfIdf = new HashMap<String, HashMap<String, Float>>();
        int docNum = fileList.size();
        for (int i = 0; i < docNum; i++) {
            String filepath = fileList.get(i);
            HashMap<String, Float> tfidf = new HashMap<String, Float>();
            HashMap<String, Float> temp = all_tf.get(filepath);
            for (Map.Entry<String, Float> entry : temp.entrySet()) {
                String word = entry.getKey();
                float value = entry.getValue() * idfs.get(word);
                tfidf.put(word, value);
            }
            resTfIdf.put(filepath, tfidf);
        }
        System.out.println("TF-IDF for every file is:");
        disTfIdf(resTfIdf);
    }

    // print the tf-idf map of every file
    public static void disTfIdf(HashMap<String, HashMap<String, Float>> tfidf) {
        for (Map.Entry<String, HashMap<String, Float>> entrys : tfidf.entrySet()) {
            System.out.println("FileName: " + entrys.getKey());
            System.out.print("{");
            HashMap<String, Float> temp = entrys.getValue();
            for (Map.Entry<String, Float> entry : temp.entrySet()) {
                System.out.print(entry.getKey() + " = " + entry.getValue() + ", ");
            }
            System.out.println("}");
        }
    }

    public static void main(String[] args) throws IOException {
        String file = "D:/testfiles";
        HashMap<String, HashMap<String, Float>> all_tf = tfAllFiles(file);
        System.out.println();
        HashMap<String, Float> idfs = idf(all_tf);
        System.out.println();
        tf_idf(all_tf, idfs);
    }
}
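One caveat about the code above: idf() computes log(docNum / df) directly, so the smoothed form mentioned in the IDF section (adding 1 to the document count) is not applied. A minimal sketch of that smoothed variant, as a hypothetical helper method whose name is my own (it could replace the Math.log line inside idf()):

    // Smoothed IDF: log(|D| / (1 + df)). The +1 keeps the denominator
    // non-zero for terms that appear in no document of the corpus.
    public static float smoothedIdf(int docNum, int docFreq) {
        return (float) Math.log(docNum / (1.0f + docFreq));
    }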
The program prints the per-file term frequencies, the IDF of every word, and finally the TF-IDF map of each file; the original post showed a screenshot of this console output, omitted here.
Common Problems
The Lucene jar was not added to the project.
The versions of the Lucene jar and the je jar do not match.
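Both problems come down to what is on the classpath. As a sketch, compiling and running on Windows might look like the following (the jar file names and versions are assumptions; substitute the ones you actually use):

    javac -cp .;IKAnalyzer2012.jar;lucene-core-3.6.2.jar tfidf/ReadFiles.java
    java -cp .;IKAnalyzer2012.jar;lucene-core-3.6.2.jar tfidf.ReadFiles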
Summary
That is everything this article covers about understanding TF-IDF and its Java implementation; I hope it is helpful.
If anything here falls short, please point it out in a comment.