Examples of Hit

  • es.mahulo.battleship.model.Hit
  • net.paoding.analysis.dictionary.Hit
    Hit is the result returned when the dictionary is searched. A lookup always returns a non-null Hit object that encodes the possible outcomes.

    A Hit object carries two kinds of information:

  • whether the queried word exists in the dictionary: {@link #isHit()}
  • whether the dictionary contains other words beginning with the given string: {@link #isUnclosed()}

  • If both of the above are negative, {@link #isUndefined()} returns true; otherwise it returns false.

    If {@link #isHit()} returns true, {@link #getWord()} returns the lookup result and {@link #getNext()} returns the next word.
    If {@link #isHit()} returns false but {@link #isUnclosed()} returns true, {@link #getNext()} returns the first word that begins with the queried string.

    @author Zhiliang Wang [qieqie.wang@gmail.com] @see Dictionary @see BinaryDictionary @see HashBinaryDictionary @since 1.0
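
    The contract described above can be sketched as a minimal stand-in class. This is a hypothetical illustration (the fields and constructor are assumptions), not the real net.paoding.analysis.dictionary.Hit:

    ```java
    // Hypothetical sketch of the documented Hit contract: two flags
    // determine three query states (hit, unclosed, undefined).
    class Hit {
        private final boolean hit;       // the queried word exists in the dictionary
        private final boolean unclosed;  // longer words start with the queried string
        private final String word;       // lookup result, meaningful only when hit == true

        Hit(boolean hit, boolean unclosed, String word) {
            this.hit = hit;
            this.unclosed = unclosed;
            this.word = word;
        }

        boolean isHit() { return hit; }
        boolean isUnclosed() { return unclosed; }
        // Undefined: neither an exact match nor a prefix of other words.
        boolean isUndefined() { return !hit && !unclosed; }
        String getWord() { return hit ? word : null; }
    }
    ```

    With this contract, a dictionary walk can stop early as soon as isUndefined() turns true, since no dictionary word can extend the current string.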

  • net.sf.katta.lib.lucene.Hit
    Note: this class has a natural ordering that is inconsistent with equals.
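    That warning means the natural ordering (typically by score) can rank two hits as equal while equals (typically by document identity) says they differ. A minimal self-contained sketch of such a class (hypothetical fields, not katta's actual implementation):

    ```java
    import java.util.Objects;

    // Hypothetical sketch of a Hit whose natural ordering (by score) is
    // inconsistent with equals (by document id), as the note above warns.
    class Hit implements Comparable<Hit> {
        private final int docId;
        private final float score;

        Hit(int docId, float score) {
            this.docId = docId;
            this.score = score;
        }

        @Override
        public int compareTo(Hit other) {
            // Orders by score only, higher score first...
            return Float.compare(other.score, this.score);
        }

        @Override
        public boolean equals(Object o) {
            // ...but equality is based on the document id.
            return o instanceof Hit && ((Hit) o).docId == this.docId;
        }

        @Override
        public int hashCode() { return Objects.hash(docId); }
    }
    ```

    Because compareTo can return 0 for hits that are not equal, such a class must not be stored in a TreeSet or used as a TreeMap key, since those collections treat compareTo() == 0 as equality.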
  • org.apache.camel.processor.lucene.support.Hit
  • org.apache.lucene.search.Hit
    Wrapper used by {@link HitIterator} to provide a lazily loaded hit from {@link Hits}. @author Jeremy Rayner
  • org.apache.nutch.searcher.Hit
    A document which matched a query in an index.
  • org.encuestame.persistence.domain.Hit
    Hits. @author Morales, Diana Paola paolaATencuestame.org @since September 08, 2011
  • org.jayasoft.woj.common.model.search.Hit
  • org.opensolaris.opengrok.egrok.model.Hit
  • org.opensolaris.opengrok.search.Hit
    The hit class represents a single search hit @author Trond Norbye
  • org.vosao.search.Hit
  • org.watermint.sourcecolon.org.opensolaris.opengrok.search.Hit
    The hit class represents a single search hit @author Trond Norbye
  • org.wltea.analyzer.dic.Hit
    IK Analyzer v3.2: represents the hit result of a dictionary lookup. @author 林良益

  • Examples of org.jayasoft.woj.common.model.search.Hit

            int i;
            int countModuleHit = 0;
            for (i = 0; i < hits.length() && results.size() < RESULTS_PER_PAGE; i++) {
              Document doc = hits.doc(i);
              String key = doc.get("visibility")+"/"+doc.get("organisation")+"/"+doc.get("module")+"/"+doc.get("fqcn");
              Hit hit = (Hit) h.get(key);
              if (hit == null) {
                hit = new Hit(doc.get("visibility"), doc.get("organisation"), doc.get("module"), doc.get("fqcn"), hits.score(i));
                h.put(key, hit);
                if (countModuleHit >= startIndex) {
                  results.add(hit);
                }
                countModuleHit++;
              }
              // not strictly necessary, but sometimes lucene doesn't find the revisions in the second step...
              hit.addRevision(doc.get("path"), doc.get("revision"));
            }
            Long nextResults = i == hits.length() ? null : new Long(startIndex + RESULTS_PER_PAGE);
            Long previousResults = startIndex == 0 ? null : new Long(startIndex - RESULTS_PER_PAGE);
           
            // then we fill in revisions
            for (Iterator iter = results.iterator(); iter.hasNext();) {
              Hit hit = (Hit) iter.next();
              hits = is.search(query, new QueryFilter(queryParser.parse(
                  "organisation:" + hit.getOrganisation()
                  + " AND module:" + hit.getModule()
                  + " AND fqcn:" + hit.getClassname()
              )));
              for (i = 0; i < hits.length(); i++) {
                Document doc = hits.doc(i);
                hit.addRevision(doc.get("path"), doc.get("revision"));
              }
            }
           
            is.close();

    Examples of org.opensolaris.opengrok.egrok.model.Hit

        JSONArray array = (JSONArray) results.get("results");

        List<Hit> resultList = new ArrayList<>();
        for (Object obj : array) {
          JSONObject result = (JSONObject) obj;
          Hit hit = new Hit(result);
          if (hit.isValid()) {
            resultList.add(hit);
          }
        }
        return resultList;
      }

    Examples of org.opensolaris.opengrok.search.Hit

                                }
                                int end = tokens.getMatchEnd();
                                if (out == null) {
                                    StringBuilder sb = new StringBuilder();
                                    writeMatch(sb, line, start, end, true,path,wcontext,nrev,rev);
                                    hits.add(new Hit(path, sb.toString(), "", false, false));
                                } else {
                                    writeMatch(out, line, start, end, false,path,wcontext,nrev,rev);
                                }
                                matchedLines++;
                                break;

    Examples of org.vosao.search.Hit

              String text = StrUtil.extractSearchTextFromHTML(
                  content.getContent());
              if (text.length() > textSize) {
                text = text.substring(0, textSize);
              }
              result.add(new Hit(page, text, language));
            }
          }
          else {
            logger.error("Page not found " + pageId + ". Rebuild index.");
          }

    Examples of org.watermint.sourcecolon.org.opensolaris.opengrok.search.Hit

                                 * desc[3] is matching line;
                                 */
                                String[] desc = {tag.symbol, Integer.toString(tag.line), tag.type, tag.text,};
                                if (in == null) {
                                    if (out == null) {
                                        Hit hit = new Hit(path, Util.htmlize(desc[3]).replace(desc[0], "<strong>" + desc[0] + "</strong>"), desc[1], false, alt);
                                        hits.add(hit);
                                        anything = true;
                                    } else {
                                        out.write("<a href=\"");
                                        out.write(urlPrefixE);

    Examples of org.wltea.analyzer.dic.Hit

              }
            }
          }
         
          // handle a new hit starting at the current input character
          Hit hit = Dictionary.matchInMainDict(segmentBuff, context.getCursor() , 1);
          if(hit.isMatch()){ // matched a full word
            // check whether an unrecognized segment precedes it
            if(context.getCursor() > doneIndex + 1){
              // emit and process the unknown segment between doneIndex+1 and context.getCursor()-1
              processUnknown(segmentBuff , context , doneIndex + 1 , context.getCursor()- 1);
            }
            // emit the current word
            Lexeme newLexeme = new Lexeme(context.getBuffOffset() , context.getCursor() , 1 , Lexeme.TYPE_CJK_NORMAL);
            context.addLexeme(newLexeme);
            // advance doneIndex to mark this position as processed
            if(doneIndex < context.getCursor()){
              doneIndex = context.getCursor();
            }

            if(hit.isPrefix()){ // also a prefix of longer words
              // queue the hit for further extension
              hitList.add(hit);
            }

          }else if(hit.isPrefix()){ // a prefix only, not a full word
            // queue the hit for further extension
            hitList.add(hit);

          }else if(hit.isUnmatch()){ // neither a word nor a prefix: treat the input as a separator character
            if(doneIndex >= context.getCursor()){
              // the current character has already been processed; no further processUnknown needed
              return;
            }

            // emit the unknown segment from doneIndex+1 up to and including the current character
            processUnknown(segmentBuff , context , doneIndex + 1 , context.getCursor());
            // advance doneIndex to mark this position as processed
            doneIndex = context.getCursor();
          }

        }else { // the input is not a CJK character
          if(hitList.size() > 0
              &&  doneIndex < context.getCursor() - 1){
            for(Hit hit : hitList){
              // check whether an unrecognized segment remains
              if(doneIndex < hit.getEnd()){
                // emit and process the unknown segment between doneIndex+1 and hit.getEnd()
                processUnknown(segmentBuff , context , doneIndex + 1 , hit.getEnd());
              }
            }
          }
          // clear the pending hit queue
          hitList.clear();
          // advance doneIndex to mark this position as processed
          if(doneIndex < context.getCursor()){
            doneIndex = context.getCursor();
          }
        }

        // boundary handling at the end of the read buffer
        if(context.getCursor() == context.getAvailable() - 1){ // the last character in the buffer
          if( hitList.size() > 0 // the queue still holds pending hits
            && doneIndex < context.getCursor()){ // and the last character has not been emitted yet
            for(Hit hit : hitList){
              // check whether an unrecognized segment remains
              if(doneIndex < hit.getEnd() ){
                // emit and process the unknown segment between doneIndex+1 and hit.getEnd()
                processUnknown(segmentBuff , context , doneIndex + 1 , hit.getEnd());
              }
            }
          }
          // clear the pending hit queue
          hitList.clear();
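    The branches above rely on a Hit that can be a full match, a prefix of longer words, or both at once (e.g. a word that is itself in the dictionary and also starts longer entries). A minimal sketch of that state model (hypothetical flag values; the real org.wltea.analyzer.dic.Hit may encode this differently):

    ```java
    // Hypothetical sketch of the three dictionary-lookup states the
    // IK Analyzer code above branches on. MATCH and PREFIX can be
    // combined, which is why isMatch() and isPrefix() are tested separately.
    class Hit {
        static final int UNMATCH = 0; // not a word and not a prefix
        static final int MATCH = 1;   // an exact word in the dictionary
        static final int PREFIX = 2;  // a prefix of longer dictionary words

        private final int state;

        Hit(int state) { this.state = state; }

        boolean isMatch()   { return (state & MATCH) > 0; }
        boolean isPrefix()  { return (state & PREFIX) > 0; }
        boolean isUnmatch() { return state == UNMATCH; }
    }
    ```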

    Examples of org.wltea.analyzer.dic.Hit

       * @param uEnd end position
       */
      private void processUnknown(char[] segmentBuff , Context context , int uBegin , int uEnd){
        Lexeme newLexeme = null;

        Hit hit = Dictionary.matchInPrepDict(segmentBuff, uBegin, 1);
        if(hit.isUnmatch()){ // not an adverb or preposition
          if(uBegin > 0){ // handle a possible surname
            hit = Dictionary.matchInSurnameDict(segmentBuff, uBegin - 1 , 1);
            if(hit.isMatch()){
              // emit the surname
              newLexeme = new Lexeme(context.getBuffOffset() , uBegin - 1 , 1 , Lexeme.TYPE_CJK_SN);
              context.addLexeme(newLexeme);
            }
          }
        }

        // emit the unknown segment as single characters
        for(int i = uBegin ; i <= uEnd ; i++){
          newLexeme = new Lexeme(context.getBuffOffset() , i , 1 , Lexeme.TYPE_CJK_UNKNOWN);
          context.addLexeme(newLexeme);
        }

        hit = Dictionary.matchInPrepDict(segmentBuff, uEnd, 1);
        if(hit.isUnmatch()){ // not an adverb or preposition
          int length = 1;
          while(uEnd < context.getAvailable() - length){ // look for a suffix word
            hit = Dictionary.matchInSuffixDict(segmentBuff, uEnd + 1 , length);
            if(hit.isMatch()){
              // emit the suffix
              newLexeme = new Lexeme(context.getBuffOffset() , uEnd + 1 , length , Lexeme.TYPE_CJK_SF);
              context.addLexeme(newLexeme);
              break;
            }
            if(hit.isUnmatch()){
              break;
            }
            length++;
          }
        }

    Examples of org.wltea.analyzer.dic.Hit

       * Handle Chinese quantifiers (measure words)
       * @param segmentBuff
       * @param context
       */
      private void processCount(char[] segmentBuff , Context context){
        Hit hit = null;

        if(countStart == -1){
          hit = Dictionary.matchInQuantifierDict(segmentBuff , context.getCursor() , 1);
        }else{
          hit = Dictionary.matchInQuantifierDict(segmentBuff , countStart , context.getCursor() - countStart + 1);
        }

        if(hit != null){
          if(hit.isPrefix()){
            if(countStart == -1){
              // mark the start of the quantifier
              countStart = context.getCursor();
            }
          }

          if(hit.isMatch()){
            if(countStart == -1){
              countStart = context.getCursor();
            }
            // mark the possible end of the quantifier
            countEnd = context.getCursor();
            // emit the candidate quantifier
            outputCountLexeme(context);
          }

          if(hit.isUnmatch()){
            if(countStart != -1){
              // reset quantifier state
              countStart = -1;
              countEnd = -1;
            }

    Examples of org.wltea.analyzer.dic.Hit

          e1.printStackTrace();
        }
       
        System.out.println(new Date() + " begin match");
        long begintime = System.currentTimeMillis();
        Hit hit = null;
        int umCount = 0;
        int mCount = 0;
        for(String word : allWords){
          char[] chars = word.toCharArray();
          hit = _root_.match(chars , 0, chars.length);
          if(hit.isUnmatch()){
            //System.out.println(word);
            umCount++;
          }else{
            mCount++;
            //System.out.println(mCount + " : " + word);

    Examples of org.wltea.analyzer.dic.Hit

            e1.printStackTrace();
          }
         
          System.out.println(new Date() + " begin match");
          long begintime = System.currentTimeMillis();
          Hit hit = null;
          int umCount = 0;
          int mCount = 0;
          for(String word : allWords){     
            hit = Dictionary.matchInMainDict(word.toCharArray(), 0, word.length());
            if(hit.isUnmatch()){
              System.out.println(word);
              umCount++;
            }else{
              mCount++;
            }
    Copyright © 2018 www.massapi.com. All rights reserved.