Learning Lucene 5: SpanQuery Span Queries

2023-11-02 01:48


    SpanQuery has quite a few subclasses, so I will cover them all in one article. SpanQuery means span query, and the first thing to understand is the concept of a span. In Lucene a span is defined by the Spans class, whose source looks like this:

 

/** Expert: an enumeration of span matches.  Used to implement span searching.
 * Each span represents a range of term positions within a document.  Matches
 * are enumerated in order, by increasing document number, within that by
 * increasing start position and finally by increasing end position. */
public abstract class Spans {
  /** Move to the next match, returning true iff any such exists. */
  public abstract boolean next() throws IOException;

  /** Skips to the first match beyond the current, whose document number is
   * greater than or equal to <i>target</i>.
   * <p>The behavior of this method is <b>undefined</b> when called with
   * <code> target &le; current</code>, or after the iterator has exhausted.
   * Both cases may result in unpredicted behavior.
   * <p>Returns true iff there is such a match.
   * <p>Behaves as if written: <pre class="prettyprint">
   *   boolean skipTo(int target) {
   *     do {
   *       if (!next())
   *         return false;
   *     } while (target > doc());
   *     return true;
   *   }
   * </pre>
   * Most implementations are considerably more efficient than that. */
  public abstract boolean skipTo(int target) throws IOException;

  /** Returns the document number of the current match.  Initially invalid. */
  public abstract int doc();

  /** Returns the start position of the current match.  Initially invalid. */
  public abstract int start();

  /** Returns the end position of the current match.  Initially invalid. */
  public abstract int end();

  /** Returns the payload data for the current span.
   * This is invalid until {@link #next()} is called for the first time.
   * This method must not be called more than once after each call
   * of {@link #next()}. However, most payloads are loaded lazily,
   * so if the payload data for the current position is not needed,
   * this method may not be called at all for performance reasons. An ordered
   * SpanQuery does not lazy load, so if you have payloads in your index and
   * you do not want ordered SpanNearQuerys to collect payloads, you can
   * disable collection with a constructor option.
   * <br>Note that the return type is a collection, thus the ordering should not be relied upon.
   * @lucene.experimental
   * @return a List of byte arrays containing the data of this payload, otherwise null if isPayloadAvailable is false
   * @throws IOException if there is a low-level I/O error */
  // TODO: Remove warning after API has been finalized
  public abstract Collection<byte[]> getPayload() throws IOException;

  /** Checks if a payload can be loaded at this position.
   * <p>Payloads can only be loaded once per call to {@link #next()}.
   * @return true if there is a payload available at this position that can be loaded */
  public abstract boolean isPayloadAvailable() throws IOException;

  /** Returns the estimated cost of this spans.
   * <p>This is generally an upper bound of the number of documents this iterator
   * might match, but may be a rough heuristic, hardcoded value, or otherwise
   * completely inaccurate. */
  public abstract long cost();
}

    A span carries the start and end positions of the matched term within a document, along with an estimated cost for the span, any payload data, and so on.

 

   The first subclass to cover is SpanTermQuery. It is used much like TermQuery; the only difference is that SpanTermQuery can also report the term's span information. Usage:

 

package com.yida.framework.lucene5.query;

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

/**
 * SpanTermQuery usage test
 * @author Lanxiaowei
 */
public class SpanTermQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        Document doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick red fox jumps over the sleepy cat", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        String queryString = "red";
        SpanQuery query = new SpanTermQuery(new Term("text", queryString));
        TopDocs results = searcher.search(query, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String path = document.get("text");
            System.out.println("text:" + path);
        }
    }
}

   SpanNearQuery matches on the span between two terms, that is, how many positions one term must cross to reach the other. The slop factor limits the maximum allowed span between the two terms; it would make no sense to call two terms "near" each other if one had to cross an enormous distance to reach the other, hence the slop limit. Another parameter to watch is inOrder, which controls whether matching may run in reverse. In other words, TermA does not have to reach TermB left to right; it may also match right to left, which is reverse matching. inOrder=true means order matters and matching must run forward; false allows reverse matching. Note that stop words are not counted toward the slop.

    Understanding slop is important:

    By default slop is 0, which amounts to a TermQuery-style exact match. Setting slop relaxes this: for example, "one five" matching against "one two three four five" needs slop=3; with slop=2 there is no match. You can think of slop as the number of positions a word is allowed to move, left or right. Note in particular that PhraseQuery does not guarantee word order: in the example above, "two one" needs a slop of 2, i.e. "one" moves two positions to the left to produce the matching "one two"; "five three one" would need slop=6 to match.
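The slop arithmetic above can be sketched directly with PhraseQuery. This is a minimal sketch, not from the original post; the class name PhraseSlopSketch is mine, and it assumes the early Lucene 5.0 PhraseQuery API (add() and setSlop(); later 5.x releases moved to a PhraseQuery.Builder):

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class PhraseSlopSketch {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
        Document doc = new Document();
        doc.add(new TextField("text", "one two three four five", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        // "one five": "five" sits at position 4 but the phrase expects it at
        // position 1, so it must "move" 3 positions; slop=2 would find nothing
        PhraseQuery phrase = new PhraseQuery();
        phrase.add(new Term("text", "one"));
        phrase.add(new Term("text", "five"));
        phrase.setSlop(3);
        System.out.println("hits: " + searcher.search(phrase, 10).totalHits);
    }
}
```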

 

There is one more parameter, collectPayloads, which indicates whether payload data should be collected; payloads will be covered separately later.

 

    The SpanNearQuery constructor looks like this:

 

public SpanNearQuery(SpanQuery[] clauses, int slop, boolean inOrder, boolean collectPayloads) {
    // copy clauses array into an ArrayList
    this.clauses = new ArrayList<>(clauses.length);
    for (int i = 0; i < clauses.length; i++) {
        SpanQuery clause = clauses[i];
        if (field == null) {                               // check field
            field = clause.getField();
        } else if (clause.getField() != null && !clause.getField().equals(field)) {
            throw new IllegalArgumentException("Clauses must have same field.");
        }
        this.clauses.add(clause);
    }
    this.collectPayloads = collectPayloads;
    this.slop = slop;
    this.inOrder = inOrder;
}

   A SpanNearQuery usage example:

 

 

/**
 * SpanNearQuery test
 * @author Lanxiaowei
 */
public class SpanNearQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        Document doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick red fox jumps over the sleepy cat", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        String queryStringStart = "dog";
        String queryStringEnd = "quick";
        SpanQuery queryStart = new SpanTermQuery(new Term("text", queryStringStart));
        SpanQuery queryEnd = new SpanTermQuery(new Term("text", queryStringEnd));
        SpanQuery spanNearQuery = new SpanNearQuery(new SpanQuery[] {
                queryStart, queryEnd }, 6, false, false);
        TopDocs results = searcher.search(spanNearQuery, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String path = document.get("text");
            System.out.println("text:" + path);
        }
    }
}

   In this example, "dog" must cross 6 positions to reach "quick", matching from right to left, so inOrder is set to false; setting it to true would return no results.

 

   SpanNotQuery: the use case is this. With a SpanNearQuery there may be several ways to get from TermA to TermB, because TermA or TermB may occur more than once in the index. SpanNotQuery restricts matches so that no TermC occurs between TermA and TermB, ruling out some of those cases and giving more precise control. The default SpanNotQuery constructor looks like this:

 

/** Construct a SpanNotQuery matching spans from <code>include</code> which
 * have no overlap with spans from <code>exclude</code>. */
public SpanNotQuery(SpanQuery include, SpanQuery exclude) {
    this(include, exclude, 0, 0);
}

 Clearly the first parameter, include, will usually be a SpanNearQuery, and the second parameter is what gets excluded.

 

   Another overloaded SpanNotQuery constructor:

 

/** Construct a SpanNotQuery matching spans from <code>include</code> which
 * have no overlap with spans from <code>exclude</code> within
 * <code>dist</code> tokens of <code>include</code>. */
public SpanNotQuery(SpanQuery include, SpanQuery exclude, int dist) {
    this(include, exclude, dist, dist);
}

 It adds a dist parameter. The official explanation: "Construct a SpanNotQuery matching spans from include which have no overlap with spans from exclude within dist tokens of include." Put plainly, dist limits how many tokens around the include match the excluded term must stay clear of.

 

SpanNotQuery has one more, even richer, constructor overload:

 

/** Construct a SpanNotQuery matching spans from <code>include</code> which
 * have no overlap with spans from <code>exclude</code> within
 * <code>pre</code> tokens before or <code>post</code> tokens of <code>include</code>. */
public SpanNotQuery(SpanQuery include, SpanQuery exclude, int pre, int post) {
    this.include = include;
    this.exclude = exclude;
    this.pre = (pre >= 0) ? pre : 0;
    this.post = (post >= 0) ? post : 0;
    if (include.getField() != null && exclude.getField() != null && !include.getField().equals(exclude.getField()))
        throw new IllegalArgumentException("Clauses must have same field.");
}

 The last parameter, post, is really dist, and pre limits how many tokens before the include span the excluded term must stay clear of. That explanation is abstract, so let the example code do the talking:

 

 

package com.yida.framework.lucene5.query;

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanNotQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

/**
 * SpanNotQuery test
 * @author Lanxiaowei
 */
public class SpanNotQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        Document doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick red fox jumps over the sleepy cat", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox quick gox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown adult slave nice fox winde felt testcase gox quick jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox quick jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        String queryStringStart = "dog";
        String queryStringEnd = "quick";
        String excludeString = "fox";
        SpanQuery queryStart = new SpanTermQuery(new Term("text", queryStringStart));
        SpanQuery queryEnd = new SpanTermQuery(new Term("text", queryStringEnd));
        SpanQuery excludeQuery = new SpanTermQuery(new Term("text", excludeString));
        SpanQuery spanNearQuery = new SpanNearQuery(new SpanQuery[] {
                queryStart, queryEnd }, 12, false, false);
        SpanNotQuery spanNotQuery = new SpanNotQuery(spanNearQuery, excludeQuery, 4, 3);
        TopDocs results = searcher.search(spanNotQuery, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String path = document.get("text");
            System.out.println("text:" + path);
        }
    }
}

   The example queries for documents in which no "fox" occurs between "dog" and "quick"; run the code yourself to work through it.

 

   SpanOrQuery, as the name suggests, joins several SpanQuery instances with OR. You could substitute a BooleanQuery, but SpanOrQuery additionally returns span information. Its constructor:

 

SpanOrQuery(SpanQuery... clauses) 

    It accepts several SpanQuery objects and ORs them together. SpanOrQuery example code:

 

   

package com.yida.framework.lucene5.query;

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanNotQuery;
import org.apache.lucene.search.spans.SpanOrQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

/**
 * SpanOrQuery test
 * @author Lanxiaowei
 */
public class SpanOrQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        Document doc = new Document();
        doc.add(new TextField("text", "the quick brown fox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick red fox jumps over the sleepy cat", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox quick gox jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown adult slave nice fox winde felt testcase gox quick jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown adult sick slave nice fox winde felt testcase fox quick jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "the quick brown fox quick jumps over the lazy dog", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        String queryStringStart = "dog";
        String queryStringEnd = "quick";
        String excludeString = "fox";
        String termString = "sick";
        SpanQuery queryStart = new SpanTermQuery(new Term("text", queryStringStart));
        SpanQuery queryEnd = new SpanTermQuery(new Term("text", queryStringEnd));
        SpanQuery excludeQuery = new SpanTermQuery(new Term("text", excludeString));
        SpanQuery spanNearQuery = new SpanNearQuery(new SpanQuery[] {
                queryStart, queryEnd }, 12, false, false);
        SpanNotQuery spanNotQuery = new SpanNotQuery(spanNearQuery, excludeQuery, 4, 3);
        SpanQuery spanTermQuery = new SpanTermQuery(new Term("text", termString));
        SpanOrQuery spanOrQuery = new SpanOrQuery(spanNotQuery, spanTermQuery);
        TopDocs results = searcher.search(spanOrQuery, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String path = document.get("text");
            System.out.println("text:" + path);
        }
    }
}

   SpanMultiTermQueryWrapper is a query adapter: it wraps a MultiTermQuery as a SpanQuery. For usage, here is the example from the official API docs:

 

 

WildcardQuery wildcard = new WildcardQuery(new Term("field", "bro?n"));
SpanQuery spanWildcard = new SpanMultiTermQueryWrapper<WildcardQuery>(wildcard);

 SpanPositionRangeQuery restricts matches to those whose positions fall within the interval (start, end), with positions counted from zero. Again, the example code tells the story:

 

 

package com.yida.framework.lucene5.query;

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
import org.apache.lucene.search.spans.SpanPositionRangeQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

/**
 * SpanPositionRangeQuery test
 * @author Lanxiaowei
 */
public class SpanPositionRangeQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        Document doc = new Document();
        doc.add(new TextField("text", "quick brown fox", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "jumps over lazy broun dog", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        doc.add(new TextField("text", "jumps over extremely very lazy broxn dog", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        FuzzyQuery fq = new FuzzyQuery(new Term("text", "broan"));
        SpanQuery sfq = new SpanMultiTermQueryWrapper<FuzzyQuery>(fq);
        SpanPositionRangeQuery spanPositionRangeQuery = new SpanPositionRangeQuery(sfq, 3, 5);
        TopDocs results = searcher.search(spanPositionRangeQuery, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String path = document.get("text");
            System.out.println("text:" + path);
        }
    }
}

 A quick walk through the code. First, FuzzyQuery fq = new FuzzyQuery(new Term("text", "broan")) finds documents containing terms similar to "broan"; in the first document the only candidate, "brown", sits at position 1, so that document drops out. Next we wrap the FuzzyQuery into a SpanQuery with SpanMultiTermQueryWrapper and use SpanPositionRangeQuery to require that the fuzzy matches fall within positions (3, 5). In the third document the similar term "broxn" falls in (5, 6), outside that range, so it is excluded as well; only the second document, where "broun" sits at position 3, is returned.

 

    SpanPositionRangeQuery has a subclass, SpanFirstQuery, which simply fixes the start parameter of the SpanPositionRangeQuery constructor at 0, nothing more. Its constructor:

SpanFirstQuery(SpanQuery match, int end) 
Construct a SpanFirstQuery matching spans in match whose end position is less than or equal to end.

 This is why there is only an end parameter and no start: in the source, the constructor simply delegates to SpanPositionRangeQuery with start fixed at 0.
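The original post provides no SpanFirstQuery example, so here is a minimal sketch; the class name SpanFirstQueryTest and the sample sentences are mine, and the sketch assumes the same Lucene 5.0 APIs as the other examples in this post:

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanFirstQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class SpanFirstQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        Document doc = new Document();
        // "fox" at position 2, span (2, 3): end 3 <= 3, should match
        doc.add(new TextField("text", "quick brown fox jumps", Field.Store.YES));
        writer.addDocument(doc);
        doc = new Document();
        // "fox" at position 3, span (3, 4): end 4 > 3, should be filtered out
        doc.add(new TextField("text", "the quick brown fox jumps", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        SpanQuery fox = new SpanTermQuery(new Term("text", "fox"));
        // keep only matches whose span ends at or before position 3
        SpanFirstQuery firstQuery = new SpanFirstQuery(fox, 3);
        TopDocs results = searcher.search(firstQuery, 10);
        for (ScoreDoc sd : results.scoreDocs) {
            System.out.println("text:" + searcher.doc(sd.doc).get("text"));
        }
    }
}
```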

   The last one to cover is FieldMaskingSpanQuery. It is used for queries across multiple fields: it makes one field masquerade as another, so the query appears to run within a single field. Lucene normally applies a clause to a single field only and does not support querying across fields, hence FieldMaskingSpanQuery. Example code below:

package com.yida.framework.lucene5.query;

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.FieldMaskingSpanQuery;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

/**
 * FieldMaskingSpanQuery test
 * @author Lanxiaowei
 */
public class FieldMaskingSpanQueryTest {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);

        // teacher1
        // (the original used the pre-5.0 Field(name, value, Store, Index)
        // constructor with Field.Index.NOT_ANALYZED, removed in Lucene 5;
        // StringField likewise indexes the value un-analyzed)
        Document doc = new Document();
        doc.add(new StringField("teacherid", "1", Field.Store.YES));
        doc.add(new StringField("studentfirstname", "james", Field.Store.YES));
        doc.add(new StringField("studentsurname", "jones", Field.Store.YES));
        writer.addDocument(doc);

        // teacher2
        doc = new Document();
        doc.add(new StringField("teacherid", "2", Field.Store.YES));
        doc.add(new StringField("studentfirstname", "james", Field.Store.YES));
        doc.add(new StringField("studentsurname", "smith", Field.Store.YES));
        doc.add(new StringField("studentfirstname", "sally", Field.Store.YES));
        doc.add(new StringField("studentsurname", "jones", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        SpanQuery q1 = new SpanTermQuery(new Term("studentfirstname", "james"));
        SpanQuery q2 = new SpanTermQuery(new Term("studentsurname", "jones"));
        SpanQuery q2m = new FieldMaskingSpanQuery(q2, "studentfirstname");
        Query query = new SpanNearQuery(new SpanQuery[] { q1, q2m }, -1, false);
        TopDocs results = searcher.search(query, null, 100);
        ScoreDoc[] scoreDocs = results.scoreDocs;
        for (int i = 0; i < scoreDocs.length; ++i) {
            //System.out.println(searcher.explain(query, scoreDocs[i].doc));
            int docID = scoreDocs[i].doc;
            Document document = searcher.doc(docID);
            String teacherid = document.get("teacherid");
            System.out.println("teacherid:" + teacherid);
        }
    }
}

   OK, that's all for SpanQuery; up next is PhraseQuery.

 





