Unique n-best list with OpenFst

Here is a wrapper function that generates a unique n-best list with OpenFst's ShortestPath algorithm. The uniqueness check is over label sequences, so the wrapper first projects the FST onto its output labels and removes epsilons before running the search.

#include <fst/fstlib.h>

using namespace fst;

// Writes the n best paths with distinct label sequences from fst into ofst.
template <class Arc>
void UniqueNbest(const Fst<Arc>& fst, int n, MutableFst<Arc>* ofst) {
  // Uniqueness is determined by label sequences, so project onto the
  // output labels and remove epsilons before searching.
  VectorFst<Arc> ifst(fst);
  Project(&ifst, PROJECT_OUTPUT);
  RmEpsilon(&ifst);
  std::vector<typename Arc::Weight> d;
  typedef AutoQueue<typename Arc::StateId> Q;
  AnyArcFilter<Arc> filter;
  Q q(ifst, &d, filter);
  ShortestPathOptions<Arc, Q, AnyArcFilter<Arc> > opts(&q, filter);
  opts.nshortest = n;   // Number of paths to return.
  opts.unique = true;   // Keep only paths with distinct label sequences.
  ShortestPath(ifst, ofst, &d, opts);
}
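
A minimal usage sketch, assuming the wrapper above is in scope and a lattice stored on disk with OpenFst's standard (tropical-weight) arcs; the file names are just placeholders:

#include <fst/fstlib.h>

int main() {
  // Read a lattice (StdArc, i.e. tropical weights) from disk.
  fst::StdVectorFst* lattice = fst::StdVectorFst::Read("lattice.fst");
  if (!lattice) return 1;
  fst::StdVectorFst nbest;
  // Extract the 10 best distinct hypotheses and write them out.
  UniqueNbest(*lattice, 10, &nbest);
  nbest.Write("nbest.fst");
  delete lattice;
  return 0;
}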

Upcoming ICASSP 2014 Paper Titles

This year’s ICASSP accepted paper list is viewable in the technical program. Neural networks are a huge force in speech recognition, and this conference has three sessions devoted just to deep neural networks.
This year there are also many interesting titles on recurrent neural networks for non-acoustic modeling, along with a few decoding-related papers. Here is a list of the upcoming papers that I’m interested in so far.

  • Real-time one-pass decoding with recurrent neural network language model for speech recognition
    Takaaki Hori, Yotaro Kubo, Atsushi Nakamura (NTT Corporation, Japan)

  • Cache based Recurrent Neural Network Language Model Inference for First Pass Speech Recognition
    Zhiheng Huang, Geoffrey Zweig, Benoit Dumoulin (Microsoft, USA)

  • Contextual Domain Classification in Spoken Language Understanding Systems Using Recurrent Neural Network
    Ruhi Sarikaya (Microsoft, USA)

  • ASR Error Detection using Recurrent Neural Network Language Model and Complementary ASR
    Yik-Cheung Tam, Yun Lei, Jing Zheng, Wen Wang (Google, USA and SRI International, USA)

  • Recurrent Conditional Random Field for Language Understanding
    Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Dong Yu, Xiaolong Li, Feng Gao (Microsoft, P.R. China)

  • Efficient Lattice Rescoring Using Recurrent Neural Network Language Models
    Xunying Liu, Yongqiang Wang, Xie Chen, Mark Gales, Phil Woodland (University of Cambridge, UK)

  • Phone sequence modeling with recurrent neural networks
    Nicolas Boulanger-Lewandowski, Jasha Droppo, Mike Seltzer, Dong Yu (University of Montreal, Canada and Microsoft Research, USA)

  • Translating TED Speeches by Recurrent Neural Network based Translation Model
    Youzheng Wu, Xinhui Hu, Chiori Hori (NICT, Japan)

  • Reshaping Deep Neural Network for Fast Decoding by Node-pruning
    Tianxing He, Yuchen Fan, Yanmin Qian, Tian Tan, Kai Yu (Shanghai Jiao Tong University, P.R. China)

  • Accelerating Large Vocabulary Continuous Speech Recognition on Heterogeneous CPU-GPU Platforms
    Jungsuk Kim, Ian Lane (Carnegie Mellon University, USA)

  • Progress in Dynamic Network Decoding
    David Nolden, Hagen Soltau, Hermann Ney (IBM and RWTH Aachen)

  • Gradient-Free Decoding Parameter Optimization on Automatic Speech Recognition
    Thach Le Nguyen, Daniel Stein, Michael Stadtschnitzer (Fraunhofer IAIS, Germany)

  • Multi-Stream Combination for LVCSR and Keyword Search on GPU-Accelerated Platform
    Wonkyum Lee, Jungsuk Kim, Ian Lane (Carnegie Mellon University, USA)

  • Accurate client-server based speech recognition keeping personal data on the client
    Munir Georges, Stephan Kanthak, Dietrich Klakow (Nuance, Germany and Saarland University, Germany)