Refactor flutter to support Android (#1072)

This commit is contained in:
Fangjun Kuang
2024-07-04 10:49:09 +08:00
committed by GitHub
parent 125bb9ff99
commit b502116068
110 changed files with 4547 additions and 32 deletions

flutter/.gitignore vendored Normal file

@@ -0,0 +1,138 @@
# Do not remove or rename entries in this file, only add new ones
# See https://github.com/flutter/flutter/issues/128635 for more context.
# Miscellaneous
*.class
*.lock
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# Visual Studio Code related
.classpath
.project
.settings/
.vscode/*
# Flutter repo-specific
/bin/cache/
/bin/internal/bootstrap.bat
/bin/internal/bootstrap.sh
/bin/mingit/
/dev/benchmarks/mega_gallery/
/dev/bots/.recipe_deps
/dev/bots/android_tools/
/dev/devicelab/ABresults*.json
/dev/docs/doc/
/dev/docs/api_docs.zip
/dev/docs/flutter.docs.zip
/dev/docs/lib/
/dev/docs/pubspec.yaml
/dev/integration_tests/**/xcuserdata
/dev/integration_tests/**/Pods
/packages/flutter/coverage/
version
analysis_benchmark.json
# packages file containing multi-root paths
.packages.generated
# Flutter/Dart/Pub related
**/doc/api/
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
**/generated_plugin_registrant.dart
.packages
.pub-preload-cache/
.pub-cache/
.pub/
build/
flutter_*.png
linked_*.ds
unlinked.ds
unlinked_spec.ds
# Android related
**/android/**/gradle-wrapper.jar
.gradle/
**/android/captures/
**/android/gradlew
**/android/gradlew.bat
**/android/local.properties
**/android/**/GeneratedPluginRegistrant.java
**/android/key.properties
*.jks
# iOS/XCode related
**/ios/**/*.mode1v3
**/ios/**/*.mode2v3
**/ios/**/*.moved-aside
**/ios/**/*.pbxuser
**/ios/**/*.perspectivev3
**/ios/**/*sync/
**/ios/**/.sconsign.dblite
**/ios/**/.tags*
**/ios/**/.vagrant/
**/ios/**/DerivedData/
**/ios/**/Icon?
**/ios/**/Pods/
**/ios/**/.symlinks/
**/ios/**/profile
**/ios/**/xcuserdata
**/ios/.generated/
**/ios/Flutter/.last_build_id
**/ios/Flutter/App.framework
**/ios/Flutter/Flutter.framework
**/ios/Flutter/Flutter.podspec
**/ios/Flutter/Generated.xcconfig
**/ios/Flutter/ephemeral
**/ios/Flutter/app.flx
**/ios/Flutter/app.zip
**/ios/Flutter/flutter_assets/
**/ios/Flutter/flutter_export_environment.sh
**/ios/ServiceDefinitions.json
**/ios/Runner/GeneratedPluginRegistrant.*
# macOS
**/Flutter/ephemeral/
**/Pods/
**/macos/Flutter/GeneratedPluginRegistrant.swift
**/macos/Flutter/ephemeral
**/xcuserdata/
# Windows
**/windows/flutter/generated_plugin_registrant.cc
**/windows/flutter/generated_plugin_registrant.h
**/windows/flutter/generated_plugins.cmake
# Linux
**/linux/flutter/generated_plugin_registrant.cc
**/linux/flutter/generated_plugin_registrant.h
**/linux/flutter/generated_plugins.cmake
# Coverage
coverage/
# Symbols
app.*.symbols
# Exceptions to above rules.
!**/ios/**/default.mode1v3
!**/ios/**/default.mode2v3
!**/ios/**/default.pbxuser
!**/ios/**/default.perspectivev3
!/packages/flutter_tools/test/data/dart_dependencies_test/**/.packages
!/dev/ci/**/Gemfile.lock
!.vscode/settings.json

flutter/README.md Normal file

@@ -0,0 +1,10 @@
# Introduction
This directory contains the source code of the Flutter
package for [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx).
Caution: You are not expected to use this directory directly;
it is for developers only.
Common users should use our published package at <https://pub.dev/packages/sherpa_onnx>

flutter/notes.md Normal file

@@ -0,0 +1,34 @@
# Introduction
This file keeps some notes about how packages in this directory
are created.
1. Create `sherpa_onnx`.
```bash
flutter create --template plugin sherpa_onnx
```
2. Create `sherpa_onnx_macos`
```bash
flutter create --template plugin_ffi --platforms macos sherpa_onnx_macos
```
3. Create `sherpa_onnx_linux`
```bash
flutter create --template plugin_ffi --platforms linux sherpa_onnx_linux
```
4. Create `sherpa_onnx_windows`
```bash
flutter create --template plugin_ffi --platforms windows sherpa_onnx_windows
```
5. Create `sherpa_onnx_android`
```bash
flutter create --template plugin_ffi --platforms android --org com.k2fsa.sherpa.onnx sherpa_onnx_android
```

flutter/sherpa_onnx/.gitignore vendored Normal file

@@ -0,0 +1,29 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
build/


@@ -0,0 +1,27 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.
version:
revision: "5dcb86f68f239346676ceb1ed1ea385bd215fba1"
channel: "stable"
project_type: plugin
# Tracks metadata for the flutter migrate command
migration:
platforms:
- platform: root
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
# User provided section
# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
- 'lib/main.dart'
- 'ios/Runner.xcodeproj/project.pbxproj'


@@ -0,0 +1,35 @@
## 1.10.7
* Support Android
## 1.10.2
* Fix passing C# string to C++
## 1.10.1
* Add the ability to stop TTS generation
## 1.10.0
* Add inverse text normalization
## 1.9.30
* Add TTS
## 1.9.29
* Publish with CI
## 0.0.3
* Fix path separator on Windows.
## 0.0.2
* Support specifying lib path.
## 0.0.1
* Initial release.


@@ -0,0 +1,3 @@
# sherpa_onnx
Please see <https://github.com/k2-fsa/sherpa-onnx>


@@ -0,0 +1,4 @@
include: package:flutter_lints/flutter.yaml
# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options

flutter/sherpa_onnx/example/.gitignore vendored Normal file

@@ -0,0 +1,43 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
.pub-cache/
.pub/
/build/
# Symbolication related
app.*.symbols
# Obfuscation related
app.*.map.json
# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release


@@ -0,0 +1,9 @@
# Introduction
Please find examples at
<https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples>
and
<https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples>


@@ -0,0 +1,20 @@
# sherpa-onnx app example
See also <https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples>
## Streaming speech recognition
Please see <https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/streaming-asr>
## Non-streaming speech recognition
Please see <https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/non-streaming-asr>
## Text to speech (TTS)
Please see <https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/tts>
## Voice activity detection (VAD)
Please see <https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/vad>


@@ -0,0 +1,56 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:io';
import 'dart:ffi';
export 'src/feature_config.dart';
export 'src/offline_recognizer.dart';
export 'src/offline_stream.dart';
export 'src/online_recognizer.dart';
export 'src/online_stream.dart';
export 'src/speaker_identification.dart';
export 'src/tts.dart';
export 'src/vad.dart';
export 'src/wave_reader.dart';
export 'src/wave_writer.dart';
import 'src/sherpa_onnx_bindings.dart';
String? _path;
// see also
// https://github.com/flutter/codelabs/blob/main/ffigen_codelab/step_05/lib/ffigen_app.dart
final DynamicLibrary _dylib = () {
if (Platform.isIOS) {
throw UnsupportedError('iOS is not supported yet');
}
if (Platform.isMacOS) {
if (_path == null) {
return DynamicLibrary.open('libsherpa-onnx-c-api.dylib');
} else {
return DynamicLibrary.open('$_path/libsherpa-onnx-c-api.dylib');
}
}
if (Platform.isAndroid || Platform.isLinux) {
if (_path == null) {
return DynamicLibrary.open('libsherpa-onnx-c-api.so');
} else {
return DynamicLibrary.open('$_path/libsherpa-onnx-c-api.so');
}
}
if (Platform.isWindows) {
if (_path == null) {
return DynamicLibrary.open('sherpa-onnx-c-api.dll');
} else {
return DynamicLibrary.open('$_path\\sherpa-onnx-c-api.dll');
}
}
throw UnsupportedError('Unknown platform: ${Platform.operatingSystem}');
}();
void initBindings([String? p]) {
_path ??= p;
SherpaOnnxBindings.init(_dylib);
}
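The loader above resolves the platform-specific library name automatically, so `initBindings` only needs a path argument when the shared library is not on the default search path. A minimal, hypothetical usage sketch (the package import and the directory path are illustrative assumptions, not part of this file):

```dart
// Hypothetical startup sketch: load the native sherpa-onnx C API once.
import 'package:sherpa_onnx/sherpa_onnx.dart' as sherpa_onnx;

void main() {
  // Uses the default library name for the current platform
  // (e.g. libsherpa-onnx-c-api.so on Android/Linux).
  sherpa_onnx.initBindings();

  // Alternatively, point at a custom directory (path is a placeholder):
  // sherpa_onnx.initBindings('/opt/sherpa-onnx/lib');
}
```

Note that `_path ??= p` means only the first call can set the path; later calls reuse it.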


@@ -0,0 +1,13 @@
// Copyright (c) 2024 Xiaomi Corporation
class FeatureConfig {
const FeatureConfig({this.sampleRate = 16000, this.featureDim = 80});
@override
String toString() {
return 'FeatureConfig(sampleRate: $sampleRate, featureDim: $featureDim)';
}
final int sampleRate;
final int featureDim;
}


@@ -0,0 +1,302 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:convert';
import 'dart:ffi';
import 'package:ffi/ffi.dart';
import './feature_config.dart';
import './offline_stream.dart';
import './sherpa_onnx_bindings.dart';
class OfflineTransducerModelConfig {
const OfflineTransducerModelConfig({
this.encoder = '',
this.decoder = '',
this.joiner = '',
});
@override
String toString() {
return 'OfflineTransducerModelConfig(encoder: $encoder, decoder: $decoder, joiner: $joiner)';
}
final String encoder;
final String decoder;
final String joiner;
}
class OfflineParaformerModelConfig {
const OfflineParaformerModelConfig({this.model = ''});
@override
String toString() {
return 'OfflineParaformerModelConfig(model: $model)';
}
final String model;
}
class OfflineNemoEncDecCtcModelConfig {
const OfflineNemoEncDecCtcModelConfig({this.model = ''});
@override
String toString() {
return 'OfflineNemoEncDecCtcModelConfig(model: $model)';
}
final String model;
}
class OfflineWhisperModelConfig {
const OfflineWhisperModelConfig(
{this.encoder = '',
this.decoder = '',
this.language = '',
this.task = '',
this.tailPaddings = -1});
@override
String toString() {
return 'OfflineWhisperModelConfig(encoder: $encoder, decoder: $decoder, language: $language, task: $task, tailPaddings: $tailPaddings)';
}
final String encoder;
final String decoder;
final String language;
final String task;
final int tailPaddings;
}
class OfflineTdnnModelConfig {
const OfflineTdnnModelConfig({this.model = ''});
@override
String toString() {
return 'OfflineTdnnModelConfig(model: $model)';
}
final String model;
}
class OfflineLMConfig {
const OfflineLMConfig({this.model = '', this.scale = 1.0});
@override
String toString() {
return 'OfflineLMConfig(model: $model, scale: $scale)';
}
final String model;
final double scale;
}
class OfflineModelConfig {
const OfflineModelConfig({
this.transducer = const OfflineTransducerModelConfig(),
this.paraformer = const OfflineParaformerModelConfig(),
this.nemoCtc = const OfflineNemoEncDecCtcModelConfig(),
this.whisper = const OfflineWhisperModelConfig(),
this.tdnn = const OfflineTdnnModelConfig(),
required this.tokens,
this.numThreads = 1,
this.debug = true,
this.provider = 'cpu',
this.modelType = '',
this.modelingUnit = '',
this.bpeVocab = '',
this.telespeechCtc = '',
});
@override
String toString() {
return 'OfflineModelConfig(transducer: $transducer, paraformer: $paraformer, nemoCtc: $nemoCtc, whisper: $whisper, tdnn: $tdnn, tokens: $tokens, numThreads: $numThreads, debug: $debug, provider: $provider, modelType: $modelType, modelingUnit: $modelingUnit, bpeVocab: $bpeVocab, telespeechCtc: $telespeechCtc)';
}
final OfflineTransducerModelConfig transducer;
final OfflineParaformerModelConfig paraformer;
final OfflineNemoEncDecCtcModelConfig nemoCtc;
final OfflineWhisperModelConfig whisper;
final OfflineTdnnModelConfig tdnn;
final String tokens;
final int numThreads;
final bool debug;
final String provider;
final String modelType;
final String modelingUnit;
final String bpeVocab;
final String telespeechCtc;
}
class OfflineRecognizerConfig {
const OfflineRecognizerConfig({
this.feat = const FeatureConfig(),
required this.model,
this.lm = const OfflineLMConfig(),
this.decodingMethod = 'greedy_search',
this.maxActivePaths = 4,
this.hotwordsFile = '',
this.hotwordsScore = 1.5,
this.ruleFsts = '',
this.ruleFars = '',
});
@override
String toString() {
return 'OfflineRecognizerConfig(feat: $feat, model: $model, lm: $lm, decodingMethod: $decodingMethod, maxActivePaths: $maxActivePaths, hotwordsFile: $hotwordsFile, hotwordsScore: $hotwordsScore, ruleFsts: $ruleFsts, ruleFars: $ruleFars)';
}
final FeatureConfig feat;
final OfflineModelConfig model;
final OfflineLMConfig lm;
final String decodingMethod;
final int maxActivePaths;
final String hotwordsFile;
final double hotwordsScore;
final String ruleFsts;
final String ruleFars;
}
class OfflineRecognizerResult {
OfflineRecognizerResult(
{required this.text, required this.tokens, required this.timestamps});
@override
String toString() {
return 'OfflineRecognizerResult(text: $text, tokens: $tokens, timestamps: $timestamps)';
}
final String text;
final List<String> tokens;
final List<double> timestamps;
}
class OfflineRecognizer {
OfflineRecognizer._({required this.ptr, required this.config});
void free() {
SherpaOnnxBindings.destroyOfflineRecognizer?.call(ptr);
ptr = nullptr;
}
/// The user is responsible for calling OfflineRecognizer.free()
/// on the returned instance to avoid a memory leak.
factory OfflineRecognizer(OfflineRecognizerConfig config) {
final c = calloc<SherpaOnnxOfflineRecognizerConfig>();
c.ref.feat.sampleRate = config.feat.sampleRate;
c.ref.feat.featureDim = config.feat.featureDim;
// transducer
c.ref.model.transducer.encoder =
config.model.transducer.encoder.toNativeUtf8();
c.ref.model.transducer.decoder =
config.model.transducer.decoder.toNativeUtf8();
c.ref.model.transducer.joiner =
config.model.transducer.joiner.toNativeUtf8();
// paraformer
c.ref.model.paraformer.model = config.model.paraformer.model.toNativeUtf8();
// nemoCtc
c.ref.model.nemoCtc.model = config.model.nemoCtc.model.toNativeUtf8();
// whisper
c.ref.model.whisper.encoder = config.model.whisper.encoder.toNativeUtf8();
c.ref.model.whisper.decoder = config.model.whisper.decoder.toNativeUtf8();
c.ref.model.whisper.language = config.model.whisper.language.toNativeUtf8();
c.ref.model.whisper.task = config.model.whisper.task.toNativeUtf8();
c.ref.model.whisper.tailPaddings = config.model.whisper.tailPaddings;
c.ref.model.tdnn.model = config.model.tdnn.model.toNativeUtf8();
c.ref.model.tokens = config.model.tokens.toNativeUtf8();
c.ref.model.numThreads = config.model.numThreads;
c.ref.model.debug = config.model.debug ? 1 : 0;
c.ref.model.provider = config.model.provider.toNativeUtf8();
c.ref.model.modelType = config.model.modelType.toNativeUtf8();
c.ref.model.modelingUnit = config.model.modelingUnit.toNativeUtf8();
c.ref.model.bpeVocab = config.model.bpeVocab.toNativeUtf8();
c.ref.model.telespeechCtc = config.model.telespeechCtc.toNativeUtf8();
c.ref.lm.model = config.lm.model.toNativeUtf8();
c.ref.lm.scale = config.lm.scale;
c.ref.decodingMethod = config.decodingMethod.toNativeUtf8();
c.ref.maxActivePaths = config.maxActivePaths;
c.ref.hotwordsFile = config.hotwordsFile.toNativeUtf8();
c.ref.hotwordsScore = config.hotwordsScore;
c.ref.ruleFsts = config.ruleFsts.toNativeUtf8();
c.ref.ruleFars = config.ruleFars.toNativeUtf8();
final ptr = SherpaOnnxBindings.createOfflineRecognizer?.call(c) ?? nullptr;
calloc.free(c.ref.ruleFars);
calloc.free(c.ref.ruleFsts);
calloc.free(c.ref.hotwordsFile);
calloc.free(c.ref.decodingMethod);
calloc.free(c.ref.lm.model);
calloc.free(c.ref.model.telespeechCtc);
calloc.free(c.ref.model.bpeVocab);
calloc.free(c.ref.model.modelingUnit);
calloc.free(c.ref.model.modelType);
calloc.free(c.ref.model.provider);
calloc.free(c.ref.model.tokens);
calloc.free(c.ref.model.tdnn.model);
calloc.free(c.ref.model.whisper.task);
calloc.free(c.ref.model.whisper.language);
calloc.free(c.ref.model.whisper.decoder);
calloc.free(c.ref.model.whisper.encoder);
calloc.free(c.ref.model.nemoCtc.model);
calloc.free(c.ref.model.paraformer.model);
calloc.free(c.ref.model.transducer.encoder);
calloc.free(c.ref.model.transducer.decoder);
calloc.free(c.ref.model.transducer.joiner);
calloc.free(c);
return OfflineRecognizer._(ptr: ptr, config: config);
}
/// The user has to invoke stream.free() on the returned instance
/// to avoid a memory leak.
OfflineStream createStream() {
final p = SherpaOnnxBindings.createOfflineStream?.call(ptr) ?? nullptr;
return OfflineStream(ptr: p);
}
void decode(OfflineStream stream) {
SherpaOnnxBindings.decodeOfflineStream?.call(ptr, stream.ptr);
}
OfflineRecognizerResult getResult(OfflineStream stream) {
final json =
SherpaOnnxBindings.getOfflineStreamResultAsJson?.call(stream.ptr) ??
nullptr;
if (json == nullptr) {
return OfflineRecognizerResult(text: '', tokens: [], timestamps: []);
}
final parsedJson = jsonDecode(json.toDartString());
SherpaOnnxBindings.destroyOfflineStreamResultJson?.call(json);
return OfflineRecognizerResult(
text: parsedJson['text'],
tokens: List<String>.from(parsedJson['tokens']),
timestamps: List<double>.from(parsedJson['timestamps']));
}
Pointer<SherpaOnnxOfflineRecognizer> ptr;
OfflineRecognizerConfig config;
}
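Putting the pieces of this file together, a hypothetical non-streaming decoding flow might look like the sketch below. The package import, model file names, and token path are placeholders that must point at real files on disk, and `initBindings` must run first so the native library is loaded:

```dart
// Hypothetical sketch of the non-streaming (offline) decoding flow.
import 'dart:typed_data';

import 'package:sherpa_onnx/sherpa_onnx.dart' as sherpa_onnx;

void main() {
  sherpa_onnx.initBindings();

  final config = sherpa_onnx.OfflineRecognizerConfig(
    model: sherpa_onnx.OfflineModelConfig(
      whisper: const sherpa_onnx.OfflineWhisperModelConfig(
        encoder: './whisper-encoder.onnx', // placeholder path
        decoder: './whisper-decoder.onnx', // placeholder path
      ),
      tokens: './tokens.txt', // placeholder path
    ),
  );

  final recognizer = sherpa_onnx.OfflineRecognizer(config);
  final stream = recognizer.createStream();

  // In a real app `samples` comes from a decoded wave file;
  // here it is one second of silence at 16 kHz.
  final samples = Float32List(16000);
  stream.acceptWaveform(samples: samples, sampleRate: 16000);

  recognizer.decode(stream);
  print(recognizer.getResult(stream).text);

  // Both native objects must be freed explicitly.
  stream.free();
  recognizer.free();
}
```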


@@ -0,0 +1,37 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
class OfflineStream {
/// The user has to call OfflineStream.free() to avoid a memory leak.
OfflineStream({required this.ptr});
void free() {
SherpaOnnxBindings.destroyOfflineStream?.call(ptr);
ptr = nullptr;
}
/// If you have List<double> data, then you can use
/// Float32List.fromList(data) to convert data to Float32List
///
/// See
/// https://api.flutter.dev/flutter/dart-core/List-class.html
/// and
/// https://api.flutter.dev/flutter/dart-typed_data/Float32List-class.html
void acceptWaveform({required Float32List samples, required int sampleRate}) {
final n = samples.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, samples);
SherpaOnnxBindings.acceptWaveformOffline?.call(ptr, sampleRate, p, n);
calloc.free(p);
}
Pointer<SherpaOnnxOfflineStream> ptr;
}
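As the `acceptWaveform` doc comment notes, a plain `List<double>` can be converted with `Float32List.fromList` before being passed in. A self-contained illustration of just that conversion:

```dart
import 'dart:typed_data';

void main() {
  final List<double> data = [0.0, 0.5, -0.5];
  // Copies the doubles into a typed, contiguous 32-bit float buffer,
  // which is the representation acceptWaveform expects.
  final Float32List samples = Float32List.fromList(data);
  print(samples.length); // 3
}
```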


@@ -0,0 +1,297 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:convert';
import 'dart:ffi';
import 'package:ffi/ffi.dart';
import './feature_config.dart';
import './online_stream.dart';
import './sherpa_onnx_bindings.dart';
class OnlineTransducerModelConfig {
const OnlineTransducerModelConfig({
this.encoder = '',
this.decoder = '',
this.joiner = '',
});
@override
String toString() {
return 'OnlineTransducerModelConfig(encoder: $encoder, decoder: $decoder, joiner: $joiner)';
}
final String encoder;
final String decoder;
final String joiner;
}
class OnlineParaformerModelConfig {
const OnlineParaformerModelConfig({this.encoder = '', this.decoder = ''});
@override
String toString() {
return 'OnlineParaformerModelConfig(encoder: $encoder, decoder: $decoder)';
}
final String encoder;
final String decoder;
}
class OnlineZipformer2CtcModelConfig {
const OnlineZipformer2CtcModelConfig({this.model = ''});
@override
String toString() {
return 'OnlineZipformer2CtcModelConfig(model: $model)';
}
final String model;
}
class OnlineModelConfig {
const OnlineModelConfig({
this.transducer = const OnlineTransducerModelConfig(),
this.paraformer = const OnlineParaformerModelConfig(),
this.zipformer2Ctc = const OnlineZipformer2CtcModelConfig(),
required this.tokens,
this.numThreads = 1,
this.provider = 'cpu',
this.debug = true,
this.modelType = '',
this.modelingUnit = '',
this.bpeVocab = '',
});
@override
String toString() {
return 'OnlineModelConfig(transducer: $transducer, paraformer: $paraformer, zipformer2Ctc: $zipformer2Ctc, tokens: $tokens, numThreads: $numThreads, provider: $provider, debug: $debug, modelType: $modelType, modelingUnit: $modelingUnit, bpeVocab: $bpeVocab)';
}
final OnlineTransducerModelConfig transducer;
final OnlineParaformerModelConfig paraformer;
final OnlineZipformer2CtcModelConfig zipformer2Ctc;
final String tokens;
final int numThreads;
final String provider;
final bool debug;
final String modelType;
final String modelingUnit;
final String bpeVocab;
}
class OnlineCtcFstDecoderConfig {
const OnlineCtcFstDecoderConfig({this.graph = '', this.maxActive = 3000});
@override
String toString() {
return 'OnlineCtcFstDecoderConfig(graph: $graph, maxActive: $maxActive)';
}
final String graph;
final int maxActive;
}
class OnlineRecognizerConfig {
const OnlineRecognizerConfig({
this.feat = const FeatureConfig(),
required this.model,
this.decodingMethod = 'greedy_search',
this.maxActivePaths = 4,
this.enableEndpoint = true,
this.rule1MinTrailingSilence = 2.4,
this.rule2MinTrailingSilence = 1.2,
this.rule3MinUtteranceLength = 20,
this.hotwordsFile = '',
this.hotwordsScore = 1.5,
this.ctcFstDecoderConfig = const OnlineCtcFstDecoderConfig(),
this.ruleFsts = '',
this.ruleFars = '',
});
@override
String toString() {
return 'OnlineRecognizerConfig(feat: $feat, model: $model, decodingMethod: $decodingMethod, maxActivePaths: $maxActivePaths, enableEndpoint: $enableEndpoint, rule1MinTrailingSilence: $rule1MinTrailingSilence, rule2MinTrailingSilence: $rule2MinTrailingSilence, rule3MinUtteranceLength: $rule3MinUtteranceLength, hotwordsFile: $hotwordsFile, hotwordsScore: $hotwordsScore, ctcFstDecoderConfig: $ctcFstDecoderConfig, ruleFsts: $ruleFsts, ruleFars: $ruleFars)';
}
final FeatureConfig feat;
final OnlineModelConfig model;
final String decodingMethod;
final int maxActivePaths;
final bool enableEndpoint;
final double rule1MinTrailingSilence;
final double rule2MinTrailingSilence;
final double rule3MinUtteranceLength;
final String hotwordsFile;
final double hotwordsScore;
final OnlineCtcFstDecoderConfig ctcFstDecoderConfig;
final String ruleFsts;
final String ruleFars;
}
class OnlineRecognizerResult {
OnlineRecognizerResult(
{required this.text, required this.tokens, required this.timestamps});
@override
String toString() {
return 'OnlineRecognizerResult(text: $text, tokens: $tokens, timestamps: $timestamps)';
}
final String text;
final List<String> tokens;
final List<double> timestamps;
}
class OnlineRecognizer {
OnlineRecognizer._({required this.ptr, required this.config});
/// The user is responsible for calling OnlineRecognizer.free()
/// on the returned instance to avoid a memory leak.
factory OnlineRecognizer(OnlineRecognizerConfig config) {
final c = calloc<SherpaOnnxOnlineRecognizerConfig>();
c.ref.feat.sampleRate = config.feat.sampleRate;
c.ref.feat.featureDim = config.feat.featureDim;
// transducer
c.ref.model.transducer.encoder =
config.model.transducer.encoder.toNativeUtf8();
c.ref.model.transducer.decoder =
config.model.transducer.decoder.toNativeUtf8();
c.ref.model.transducer.joiner =
config.model.transducer.joiner.toNativeUtf8();
// paraformer
c.ref.model.paraformer.encoder =
config.model.paraformer.encoder.toNativeUtf8();
c.ref.model.paraformer.decoder =
config.model.paraformer.decoder.toNativeUtf8();
// zipformer2Ctc
c.ref.model.zipformer2Ctc.model =
config.model.zipformer2Ctc.model.toNativeUtf8();
c.ref.model.tokens = config.model.tokens.toNativeUtf8();
c.ref.model.numThreads = config.model.numThreads;
c.ref.model.provider = config.model.provider.toNativeUtf8();
c.ref.model.debug = config.model.debug ? 1 : 0;
c.ref.model.modelType = config.model.modelType.toNativeUtf8();
c.ref.model.modelingUnit = config.model.modelingUnit.toNativeUtf8();
c.ref.model.bpeVocab = config.model.bpeVocab.toNativeUtf8();
c.ref.decodingMethod = config.decodingMethod.toNativeUtf8();
c.ref.maxActivePaths = config.maxActivePaths;
c.ref.enableEndpoint = config.enableEndpoint ? 1 : 0;
c.ref.rule1MinTrailingSilence = config.rule1MinTrailingSilence;
c.ref.rule2MinTrailingSilence = config.rule2MinTrailingSilence;
c.ref.rule3MinUtteranceLength = config.rule3MinUtteranceLength;
c.ref.hotwordsFile = config.hotwordsFile.toNativeUtf8();
c.ref.hotwordsScore = config.hotwordsScore;
c.ref.ctcFstDecoderConfig.graph =
config.ctcFstDecoderConfig.graph.toNativeUtf8();
c.ref.ctcFstDecoderConfig.maxActive = config.ctcFstDecoderConfig.maxActive;
c.ref.ruleFsts = config.ruleFsts.toNativeUtf8();
c.ref.ruleFars = config.ruleFars.toNativeUtf8();
final ptr = SherpaOnnxBindings.createOnlineRecognizer?.call(c) ?? nullptr;
calloc.free(c.ref.ruleFars);
calloc.free(c.ref.ruleFsts);
calloc.free(c.ref.ctcFstDecoderConfig.graph);
calloc.free(c.ref.hotwordsFile);
calloc.free(c.ref.decodingMethod);
calloc.free(c.ref.model.bpeVocab);
calloc.free(c.ref.model.modelingUnit);
calloc.free(c.ref.model.modelType);
calloc.free(c.ref.model.provider);
calloc.free(c.ref.model.tokens);
calloc.free(c.ref.model.zipformer2Ctc.model);
calloc.free(c.ref.model.paraformer.encoder);
calloc.free(c.ref.model.paraformer.decoder);
calloc.free(c.ref.model.transducer.encoder);
calloc.free(c.ref.model.transducer.decoder);
calloc.free(c.ref.model.transducer.joiner);
calloc.free(c);
return OnlineRecognizer._(ptr: ptr, config: config);
}
void free() {
SherpaOnnxBindings.destroyOnlineRecognizer?.call(ptr);
ptr = nullptr;
}
/// The user has to invoke stream.free() on the returned instance
/// to avoid a memory leak.
OnlineStream createStream({String hotwords = ''}) {
if (hotwords == '') {
final p = SherpaOnnxBindings.createOnlineStream?.call(ptr) ?? nullptr;
return OnlineStream(ptr: p);
}
final utf8 = hotwords.toNativeUtf8();
final p =
SherpaOnnxBindings.createOnlineStreamWithHotwords?.call(ptr, utf8) ??
nullptr;
calloc.free(utf8);
return OnlineStream(ptr: p);
}
bool isReady(OnlineStream stream) {
int ready =
SherpaOnnxBindings.isOnlineStreamReady?.call(ptr, stream.ptr) ?? 0;
return ready == 1;
}
OnlineRecognizerResult getResult(OnlineStream stream) {
final json =
SherpaOnnxBindings.getOnlineStreamResultAsJson?.call(ptr, stream.ptr) ??
nullptr;
if (json == nullptr) {
return OnlineRecognizerResult(text: '', tokens: [], timestamps: []);
}
final parsedJson = jsonDecode(json.toDartString());
SherpaOnnxBindings.destroyOnlineStreamResultJson?.call(json);
return OnlineRecognizerResult(
text: parsedJson['text'],
tokens: List<String>.from(parsedJson['tokens']),
timestamps: List<double>.from(parsedJson['timestamps']));
}
void reset(OnlineStream stream) {
SherpaOnnxBindings.reset?.call(ptr, stream.ptr);
}
void decode(OnlineStream stream) {
SherpaOnnxBindings.decodeOnlineStream?.call(ptr, stream.ptr);
}
bool isEndpoint(OnlineStream stream) {
int yes = SherpaOnnxBindings.isEndpoint?.call(ptr, stream.ptr) ?? 0;
return yes == 1;
}
Pointer<SherpaOnnxOnlineRecognizer> ptr;
OnlineRecognizerConfig config;
}
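A hypothetical streaming loop using this class might look like the following sketch. The package import and transducer model paths are placeholders; in a real app `acceptWaveform` is called repeatedly as microphone chunks arrive, and the endpoint check decides when one utterance ends and the next begins:

```dart
// Hypothetical sketch of the streaming (online) decoding loop.
import 'dart:typed_data';

import 'package:sherpa_onnx/sherpa_onnx.dart' as sherpa_onnx;

void main() {
  sherpa_onnx.initBindings();

  final recognizer = sherpa_onnx.OnlineRecognizer(
    sherpa_onnx.OnlineRecognizerConfig(
      model: sherpa_onnx.OnlineModelConfig(
        transducer: const sherpa_onnx.OnlineTransducerModelConfig(
          encoder: './encoder.onnx', // placeholder path
          decoder: './decoder.onnx', // placeholder path
          joiner: './joiner.onnx', // placeholder path
        ),
        tokens: './tokens.txt', // placeholder path
      ),
    ),
  );

  final stream = recognizer.createStream();

  // Feed one chunk; a real app repeats this for each incoming buffer.
  stream.acceptWaveform(
      samples: Float32List(1600), sampleRate: 16000); // 100 ms of silence

  // Drain all frames that are ready to be decoded.
  while (recognizer.isReady(stream)) {
    recognizer.decode(stream);
  }
  print(recognizer.getResult(stream).text);

  if (recognizer.isEndpoint(stream)) {
    recognizer.reset(stream); // start a new utterance
  }

  stream.free();
  recognizer.free();
}
```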


@@ -0,0 +1,41 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
class OnlineStream {
/// The user has to call OnlineStream.free() to avoid a memory leak.
OnlineStream({required this.ptr});
void free() {
SherpaOnnxBindings.destroyOnlineStream?.call(ptr);
ptr = nullptr;
}
/// If you have List<double> data, then you can use
/// Float32List.fromList(data) to convert data to Float32List
///
/// See
/// https://api.flutter.dev/flutter/dart-core/List-class.html
/// and
/// https://api.flutter.dev/flutter/dart-typed_data/Float32List-class.html
void acceptWaveform({required Float32List samples, required int sampleRate}) {
final n = samples.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, samples);
SherpaOnnxBindings.onlineStreamAcceptWaveform?.call(ptr, sampleRate, p, n);
calloc.free(p);
}
void inputFinished() {
SherpaOnnxBindings.onlineStreamInputFinished?.call(ptr);
}
Pointer<SherpaOnnxOnlineStream> ptr;
}

File diff suppressed because it is too large


@@ -0,0 +1,268 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './online_stream.dart';
import './sherpa_onnx_bindings.dart';
class SpeakerEmbeddingExtractorConfig {
const SpeakerEmbeddingExtractorConfig(
{required this.model,
this.numThreads = 1,
this.debug = true,
this.provider = 'cpu'});
@override
String toString() {
return 'SpeakerEmbeddingExtractorConfig(model: $model, numThreads: $numThreads, debug: $debug, provider: $provider)';
}
final String model;
final int numThreads;
final bool debug;
final String provider;
}
class SpeakerEmbeddingExtractor {
SpeakerEmbeddingExtractor._({required this.ptr, required this.dim});
/// The user is responsible for calling SpeakerEmbeddingExtractor.free()
/// on the returned instance to avoid a memory leak.
factory SpeakerEmbeddingExtractor(
{required SpeakerEmbeddingExtractorConfig config}) {
final c = calloc<SherpaOnnxSpeakerEmbeddingExtractorConfig>();
final modelPtr = config.model.toNativeUtf8();
c.ref.model = modelPtr;
c.ref.numThreads = config.numThreads;
c.ref.debug = config.debug ? 1 : 0;
final providerPtr = config.provider.toNativeUtf8();
c.ref.provider = providerPtr;
final ptr =
SherpaOnnxBindings.createSpeakerEmbeddingExtractor?.call(c) ?? nullptr;
calloc.free(providerPtr);
calloc.free(modelPtr);
calloc.free(c);
final dim = SherpaOnnxBindings.speakerEmbeddingExtractorDim?.call(ptr) ?? 0;
return SpeakerEmbeddingExtractor._(ptr: ptr, dim: dim);
}
void free() {
SherpaOnnxBindings.destroySpeakerEmbeddingExtractor?.call(ptr);
ptr = nullptr;
}
/// The user has to invoke stream.free() on the returned instance
/// to avoid a memory leak.
OnlineStream createStream() {
final p =
SherpaOnnxBindings.speakerEmbeddingExtractorCreateStream?.call(ptr) ??
nullptr;
return OnlineStream(ptr: p);
}
bool isReady(OnlineStream stream) {
final int ready = SherpaOnnxBindings.speakerEmbeddingExtractorIsReady
?.call(ptr, stream.ptr) ??
0;
return ready == 1;
}
Float32List compute(OnlineStream stream) {
final Pointer<Float> embedding = SherpaOnnxBindings
.speakerEmbeddingExtractorComputeEmbedding
?.call(ptr, stream.ptr) ??
nullptr;
if (embedding == nullptr) {
return Float32List(0);
}
final embeddingList = embedding.asTypedList(dim);
final ans = Float32List(dim);
ans.setAll(0, embeddingList);
SherpaOnnxBindings.speakerEmbeddingExtractorDestroyEmbedding
?.call(embedding);
return ans;
}
Pointer<SherpaOnnxSpeakerEmbeddingExtractor> ptr;
final int dim;
}
class SpeakerEmbeddingManager {
SpeakerEmbeddingManager._({required this.ptr, required this.dim});
// The user has to call SpeakerEmbeddingManager.free() to avoid a memory leak.
factory SpeakerEmbeddingManager(int dim) {
final p =
SherpaOnnxBindings.createSpeakerEmbeddingManager?.call(dim) ?? nullptr;
return SpeakerEmbeddingManager._(ptr: p, dim: dim);
}
void free() {
SherpaOnnxBindings.destroySpeakerEmbeddingManager?.call(ptr);
ptr = nullptr;
}
/// Return true if added successfully; return false otherwise
bool add({required String name, required Float32List embedding}) {
assert(embedding.length == dim, '${embedding.length} vs $dim');
final Pointer<Utf8> namePtr = name.toNativeUtf8();
final int n = embedding.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, embedding);
final int ok =
SherpaOnnxBindings.speakerEmbeddingManagerAdd?.call(ptr, namePtr, p) ??
0;
calloc.free(p);
calloc.free(namePtr);
return ok == 1;
}
bool addMulti(
{required String name, required List<Float32List> embeddingList}) {
final Pointer<Utf8> namePtr = name.toNativeUtf8();
final int n = embeddingList.length;
final Pointer<Float> p = calloc<Float>(n * dim);
final pList = p.asTypedList(n * dim);
int offset = 0;
for (final e in embeddingList) {
assert(e.length == dim, '${e.length} vs $dim');
pList.setAll(offset, e);
offset += dim;
}
final int ok = SherpaOnnxBindings.speakerEmbeddingManagerAddListFlattened
?.call(ptr, namePtr, p, n) ??
0;
calloc.free(p);
calloc.free(namePtr);
return ok == 1;
}
bool contains(String name) {
final Pointer<Utf8> namePtr = name.toNativeUtf8();
final int found = SherpaOnnxBindings.speakerEmbeddingManagerContains
?.call(ptr, namePtr) ??
0;
calloc.free(namePtr);
return found == 1;
}
bool remove(String name) {
final Pointer<Utf8> namePtr = name.toNativeUtf8();
final int ok =
SherpaOnnxBindings.speakerEmbeddingManagerRemove?.call(ptr, namePtr) ??
0;
calloc.free(namePtr);
return ok == 1;
}
/// Return an empty string if no speaker is found
String search({required Float32List embedding, required double threshold}) {
assert(embedding.length == dim);
final Pointer<Float> p = calloc<Float>(dim);
final pList = p.asTypedList(dim);
pList.setAll(0, embedding);
final Pointer<Utf8> name = SherpaOnnxBindings.speakerEmbeddingManagerSearch
?.call(ptr, p, threshold) ??
nullptr;
calloc.free(p);
if (name == nullptr) {
return '';
}
final String ans = name.toDartString();
SherpaOnnxBindings.speakerEmbeddingManagerFreeSearch?.call(name);
return ans;
}
bool verify(
{required String name,
required Float32List embedding,
required double threshold}) {
assert(embedding.length == dim);
final Pointer<Utf8> namePtr = name.toNativeUtf8();
final Pointer<Float> p = calloc<Float>(dim);
final pList = p.asTypedList(dim);
pList.setAll(0, embedding);
final int ok = SherpaOnnxBindings.speakerEmbeddingManagerVerify
?.call(ptr, namePtr, p, threshold) ??
0;
calloc.free(p);
calloc.free(namePtr);
return ok == 1;
}
int get numSpeakers =>
SherpaOnnxBindings.speakerEmbeddingManagerNumSpeakers?.call(ptr) ?? 0;
List<String> get allSpeakerNames {
int n = numSpeakers;
if (n == 0) {
return <String>[];
}
final Pointer<Pointer<Utf8>> names =
SherpaOnnxBindings.speakerEmbeddingManagerGetAllSpeakers?.call(ptr) ??
nullptr;
if (names == nullptr) {
return <String>[];
}
final ans = <String>[];
// see https://api.flutter.dev/flutter/dart-ffi/PointerPointer.html
for (int i = 0; i != n; ++i) {
String name = names[i].toDartString();
ans.add(name);
}
SherpaOnnxBindings.speakerEmbeddingManagerFreeAllSpeakers?.call(names);
return ans;
}
Pointer<SherpaOnnxSpeakerEmbeddingManager> ptr;
final int dim;
}
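
The manager API above can be exercised with a synthetic embedding. A minimal sketch (the embedding values here are made up for illustration; in real use they come from SpeakerEmbeddingExtractor.compute(), and `dim` must match the model's embedding dimension):

```dart
import 'dart:typed_data';

import 'package:sherpa_onnx/sherpa_onnx.dart';

void main() {
  const dim = 3; // real speaker models typically use a larger dim, e.g. 512
  final manager = SpeakerEmbeddingManager(dim);

  // A synthetic embedding for illustration only; real embeddings come from
  // SpeakerEmbeddingExtractor.compute().
  final embedding = Float32List.fromList([0.1, 0.2, 0.3]);
  final added = manager.add(name: 'alice', embedding: embedding);
  print('added: $added, numSpeakers: ${manager.numSpeakers}');

  // search() returns '' when no registered speaker crosses the threshold.
  final name = manager.search(embedding: embedding, threshold: 0.5);
  print('matched: "$name"');

  manager.free(); // required; the manager owns a native handle
}
```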


@@ -0,0 +1,193 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
class OfflineTtsVitsModelConfig {
const OfflineTtsVitsModelConfig({
required this.model,
this.lexicon = '',
required this.tokens,
this.dataDir = '',
this.noiseScale = 0.667,
this.noiseScaleW = 0.8,
this.lengthScale = 1.0,
this.dictDir = '',
});
@override
String toString() {
return 'OfflineTtsVitsModelConfig(model: $model, lexicon: $lexicon, tokens: $tokens, dataDir: $dataDir, noiseScale: $noiseScale, noiseScaleW: $noiseScaleW, lengthScale: $lengthScale, dictDir: $dictDir)';
}
final String model;
final String lexicon;
final String tokens;
final String dataDir;
final double noiseScale;
final double noiseScaleW;
final double lengthScale;
final String dictDir;
}
class OfflineTtsModelConfig {
const OfflineTtsModelConfig({
required this.vits,
this.numThreads = 1,
this.debug = true,
this.provider = 'cpu',
});
@override
String toString() {
return 'OfflineTtsModelConfig(vits: $vits, numThreads: $numThreads, debug: $debug, provider: $provider)';
}
final OfflineTtsVitsModelConfig vits;
final int numThreads;
final bool debug;
final String provider;
}
class OfflineTtsConfig {
const OfflineTtsConfig({
required this.model,
this.ruleFsts = '',
this.maxNumSenetences = 1,
this.ruleFars = '',
});
@override
String toString() {
return 'OfflineTtsConfig(model: $model, ruleFsts: $ruleFsts, maxNumSenetences: $maxNumSenetences, ruleFars: $ruleFars)';
}
final OfflineTtsModelConfig model;
final String ruleFsts;
final int maxNumSenetences;
final String ruleFars;
}
class GeneratedAudio {
GeneratedAudio({
required this.samples,
required this.sampleRate,
});
final Float32List samples;
final int sampleRate;
}
class OfflineTts {
OfflineTts._({required this.ptr, required this.config});
/// The user is responsible for calling OfflineTts.free() on the
/// returned instance to avoid a memory leak.
factory OfflineTts(OfflineTtsConfig config) {
final c = calloc<SherpaOnnxOfflineTtsConfig>();
c.ref.model.vits.model = config.model.vits.model.toNativeUtf8();
c.ref.model.vits.lexicon = config.model.vits.lexicon.toNativeUtf8();
c.ref.model.vits.tokens = config.model.vits.tokens.toNativeUtf8();
c.ref.model.vits.dataDir = config.model.vits.dataDir.toNativeUtf8();
c.ref.model.vits.noiseScale = config.model.vits.noiseScale;
c.ref.model.vits.noiseScaleW = config.model.vits.noiseScaleW;
c.ref.model.vits.lengthScale = config.model.vits.lengthScale;
c.ref.model.vits.dictDir = config.model.vits.dictDir.toNativeUtf8();
c.ref.model.numThreads = config.model.numThreads;
c.ref.model.debug = config.model.debug ? 1 : 0;
c.ref.model.provider = config.model.provider.toNativeUtf8();
c.ref.ruleFsts = config.ruleFsts.toNativeUtf8();
c.ref.maxNumSenetences = config.maxNumSenetences;
c.ref.ruleFars = config.ruleFars.toNativeUtf8();
final ptr = SherpaOnnxBindings.createOfflineTts?.call(c) ?? nullptr;
calloc.free(c.ref.ruleFars);
calloc.free(c.ref.ruleFsts);
calloc.free(c.ref.model.provider);
calloc.free(c.ref.model.vits.dictDir);
calloc.free(c.ref.model.vits.dataDir);
calloc.free(c.ref.model.vits.tokens);
calloc.free(c.ref.model.vits.lexicon);
calloc.free(c.ref.model.vits.model);
return OfflineTts._(ptr: ptr, config: config);
}
void free() {
SherpaOnnxBindings.destroyOfflineTts?.call(ptr);
ptr = nullptr;
}
GeneratedAudio generate(
{required String text, int sid = 0, double speed = 1.0}) {
final Pointer<Utf8> textPtr = text.toNativeUtf8();
final p =
SherpaOnnxBindings.offlineTtsGenerate?.call(ptr, textPtr, sid, speed) ??
nullptr;
calloc.free(textPtr);
if (p == nullptr) {
return GeneratedAudio(samples: Float32List(0), sampleRate: 0);
}
final samples = p.ref.samples.asTypedList(p.ref.n);
final sampleRate = p.ref.sampleRate;
final newSamples = Float32List.fromList(samples);
SherpaOnnxBindings.destroyOfflineTtsGeneratedAudio?.call(p);
return GeneratedAudio(samples: newSamples, sampleRate: sampleRate);
}
GeneratedAudio generateWithCallback(
{required String text,
int sid = 0,
double speed = 1.0,
required int Function(Float32List samples) callback}) {
// see
// https://github.com/dart-lang/sdk/issues/54276#issuecomment-1846109285
// https://stackoverflow.com/questions/69537440/callbacks-in-dart-dartffi-only-supports-calling-static-dart-functions-from-nat
// https://github.com/dart-lang/sdk/blob/main/tests/ffi/isolate_local_function_callbacks_test.dart#L46
final wrapper =
NativeCallable<SherpaOnnxGeneratedAudioCallbackNative>.isolateLocal(
(Pointer<Float> samples, int n) {
final s = samples.asTypedList(n);
final newSamples = Float32List.fromList(s);
return callback(newSamples);
}, exceptionalReturn: 0);
final Pointer<Utf8> textPtr = text.toNativeUtf8();
final p = SherpaOnnxBindings.offlineTtsGenerateWithCallback
?.call(ptr, textPtr, sid, speed, wrapper.nativeFunction) ??
nullptr;
calloc.free(textPtr);
wrapper.close();
if (p == nullptr) {
return GeneratedAudio(samples: Float32List(0), sampleRate: 0);
}
final samples = p.ref.samples.asTypedList(p.ref.n);
final sampleRate = p.ref.sampleRate;
final newSamples = Float32List.fromList(samples);
SherpaOnnxBindings.destroyOfflineTtsGeneratedAudio?.call(p);
return GeneratedAudio(samples: newSamples, sampleRate: sampleRate);
}
int get sampleRate => SherpaOnnxBindings.offlineTtsSampleRate?.call(ptr) ?? 0;
int get numSpeakers =>
SherpaOnnxBindings.offlineTtsNumSpeakers?.call(ptr) ?? 0;
Pointer<SherpaOnnxOfflineTts> ptr;
OfflineTtsConfig config;
}
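
A minimal usage sketch of the class above. The file names are placeholders for a downloaded VITS model, and writeWave() is the helper defined elsewhere in this package:

```dart
import 'package:sherpa_onnx/sherpa_onnx.dart';

void main() {
  // Placeholder paths; point them at a real VITS model directory.
  const config = OfflineTtsConfig(
    model: OfflineTtsModelConfig(
      vits: OfflineTtsVitsModelConfig(
        model: 'model.onnx',
        tokens: 'tokens.txt',
        dataDir: 'espeak-ng-data',
      ),
    ),
  );

  final tts = OfflineTts(config);
  final audio = tts.generate(text: 'Hello from sherpa-onnx', sid: 0);
  writeWave(
    filename: 'out.wav',
    samples: audio.samples,
    sampleRate: audio.sampleRate,
  );
  tts.free(); // release the native handle
}
```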


@@ -0,0 +1,212 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
class SileroVadModelConfig {
const SileroVadModelConfig(
{this.model = '',
this.threshold = 0.5,
this.minSilenceDuration = 0.5,
this.minSpeechDuration = 0.25,
this.windowSize = 512});
@override
String toString() {
return 'SileroVadModelConfig(model: $model, threshold: $threshold, minSilenceDuration: $minSilenceDuration, minSpeechDuration: $minSpeechDuration, windowSize: $windowSize)';
}
final String model;
final double threshold;
final double minSilenceDuration;
final double minSpeechDuration;
final int windowSize;
}
class VadModelConfig {
VadModelConfig(
{this.sileroVad = const SileroVadModelConfig(),
this.sampleRate = 16000,
this.numThreads = 1,
this.provider = 'cpu',
this.debug = true});
@override
String toString() {
return 'VadModelConfig(sileroVad: $sileroVad, sampleRate: $sampleRate, numThreads: $numThreads, provider: $provider, debug: $debug)';
}
final SileroVadModelConfig sileroVad;
final int sampleRate;
final int numThreads;
final String provider;
final bool debug;
}
class SpeechSegment {
SpeechSegment({required this.samples, required this.start});
final Float32List samples;
final int start;
}
class CircularBuffer {
CircularBuffer._({required this.ptr});
/// The user has to invoke CircularBuffer.free() on the returned instance
/// to avoid a memory leak.
factory CircularBuffer({required int capacity}) {
assert(capacity > 0, 'capacity is $capacity');
final p =
SherpaOnnxBindings.createCircularBuffer?.call(capacity) ?? nullptr;
return CircularBuffer._(ptr: p);
}
void free() {
SherpaOnnxBindings.destroyCircularBuffer?.call(ptr);
ptr = nullptr;
}
void push(Float32List data) {
final n = data.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, data);
SherpaOnnxBindings.circularBufferPush?.call(ptr, p, n);
calloc.free(p);
}
Float32List get({required int startIndex, required int n}) {
final Pointer<Float> p =
SherpaOnnxBindings.circularBufferGet?.call(ptr, startIndex, n) ??
nullptr;
if (p == nullptr) {
return Float32List(0);
}
final pList = p.asTypedList(n);
final Float32List ans = Float32List.fromList(pList);
SherpaOnnxBindings.circularBufferFree?.call(p);
return ans;
}
void pop(int n) {
SherpaOnnxBindings.circularBufferPop?.call(ptr, n);
}
void reset() {
SherpaOnnxBindings.circularBufferReset?.call(ptr);
}
int get size => SherpaOnnxBindings.circularBufferSize?.call(ptr) ?? 0;
int get head => SherpaOnnxBindings.circularBufferHead?.call(ptr) ?? 0;
Pointer<SherpaOnnxCircularBuffer> ptr;
}
class VoiceActivityDetector {
VoiceActivityDetector._({required this.ptr, required this.config});
/// The user has to invoke VoiceActivityDetector.free() on the returned
/// instance to avoid a memory leak.
factory VoiceActivityDetector(
{required VadModelConfig config, required double bufferSizeInSeconds}) {
final c = calloc<SherpaOnnxVadModelConfig>();
final modelPtr = config.sileroVad.model.toNativeUtf8();
c.ref.sileroVad.model = modelPtr;
c.ref.sileroVad.threshold = config.sileroVad.threshold;
c.ref.sileroVad.minSilenceDuration = config.sileroVad.minSilenceDuration;
c.ref.sileroVad.minSpeechDuration = config.sileroVad.minSpeechDuration;
c.ref.sileroVad.windowSize = config.sileroVad.windowSize;
c.ref.sampleRate = config.sampleRate;
c.ref.numThreads = config.numThreads;
final providerPtr = config.provider.toNativeUtf8();
c.ref.provider = providerPtr;
c.ref.debug = config.debug ? 1 : 0;
final ptr = SherpaOnnxBindings.createVoiceActivityDetector
?.call(c, bufferSizeInSeconds) ??
nullptr;
calloc.free(providerPtr);
calloc.free(modelPtr);
calloc.free(c);
return VoiceActivityDetector._(ptr: ptr, config: config);
}
void free() {
SherpaOnnxBindings.destroyVoiceActivityDetector?.call(ptr);
ptr = nullptr;
}
void acceptWaveform(Float32List samples) {
final n = samples.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, samples);
SherpaOnnxBindings.voiceActivityDetectorAcceptWaveform?.call(ptr, p, n);
calloc.free(p);
}
bool isEmpty() {
final int empty =
SherpaOnnxBindings.voiceActivityDetectorEmpty?.call(ptr) ?? 0;
return empty == 1;
}
bool isDetected() {
final int detected =
SherpaOnnxBindings.voiceActivityDetectorDetected?.call(ptr) ?? 0;
return detected == 1;
}
void pop() {
SherpaOnnxBindings.voiceActivityDetectorPop?.call(ptr);
}
void clear() {
SherpaOnnxBindings.voiceActivityDetectorClear?.call(ptr);
}
SpeechSegment front() {
final Pointer<SherpaOnnxSpeechSegment> segment =
SherpaOnnxBindings.voiceActivityDetectorFront?.call(ptr) ?? nullptr;
if (segment == nullptr) {
return SpeechSegment(samples: Float32List(0), start: 0);
}
final sampleList = segment.ref.samples.asTypedList(segment.ref.n);
final start = segment.ref.start;
final samples = Float32List.fromList(sampleList);
SherpaOnnxBindings.destroySpeechSegment?.call(segment);
return SpeechSegment(samples: samples, start: start);
}
void reset() {
SherpaOnnxBindings.voiceActivityDetectorReset?.call(ptr);
}
Pointer<SherpaOnnxVoiceActivityDetector> ptr;
final VadModelConfig config;
}
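
A sketch of driving the detector above. 'silero_vad.onnx' is a placeholder path, and the zero-filled chunk stands in for real microphone samples normalized to [-1, 1]:

```dart
import 'dart:typed_data';

import 'package:sherpa_onnx/sherpa_onnx.dart';

void main() {
  final config = VadModelConfig(
    sileroVad: const SileroVadModelConfig(model: 'silero_vad.onnx'),
  );
  final vad = VoiceActivityDetector(config: config, bufferSizeInSeconds: 10);

  // Feed audio in windowSize-sample chunks; this silent chunk is a stand-in.
  final chunk = Float32List(config.sileroVad.windowSize);
  vad.acceptWaveform(chunk);

  // Drain any detected speech segments.
  while (!vad.isEmpty()) {
    final segment = vad.front();
    print('start: ${segment.start}, samples: ${segment.samples.length}');
    vad.pop();
  }
  vad.free();
}
```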


@@ -0,0 +1,33 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
class WaveData {
WaveData({required this.samples, required this.sampleRate});
/// normalized to [-1, 1]
Float32List samples;
int sampleRate;
}
WaveData readWave(String filename) {
final Pointer<Utf8> str = filename.toNativeUtf8();
Pointer<SherpaOnnxWave> wave =
SherpaOnnxBindings.readWave?.call(str) ?? nullptr;
calloc.free(str);
if (wave == nullptr) {
return WaveData(samples: Float32List(0), sampleRate: 0);
}
final samples = wave.ref.samples.asTypedList(wave.ref.numSamples);
final newSamples = Float32List.fromList(samples);
int sampleRate = wave.ref.sampleRate;
SherpaOnnxBindings.freeWave?.call(wave);
return WaveData(samples: newSamples, sampleRate: sampleRate);
}


@@ -0,0 +1,27 @@
// Copyright (c) 2024 Xiaomi Corporation
import 'dart:ffi';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
import './sherpa_onnx_bindings.dart';
bool writeWave(
{required String filename,
required Float32List samples,
required int sampleRate}) {
final Pointer<Utf8> filenamePtr = filename.toNativeUtf8();
final n = samples.length;
final Pointer<Float> p = calloc<Float>(n);
final pList = p.asTypedList(n);
pList.setAll(0, samples);
int ok =
SherpaOnnxBindings.writeWave?.call(p, n, sampleRate, filenamePtr) ?? 0;
calloc.free(p);
calloc.free(filenamePtr);
return ok == 1;
}
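
Together with readWave() above, the two helpers compose into a simple round-trip sketch ('input.wav' is a placeholder path):

```dart
import 'package:sherpa_onnx/sherpa_onnx.dart';

void main() {
  // 'input.wav' is a placeholder; on failure readWave() returns 0 samples.
  final wave = readWave('input.wav');
  print('read ${wave.samples.length} samples at ${wave.sampleRate} Hz');

  // Samples are normalized to [-1, 1]; write them back out unchanged.
  final ok = writeWave(
    filename: 'copy.wav',
    samples: wave.samples,
    sampleRate: wave.sampleRate,
  );
  print('write ok: $ok');
}
```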


@@ -0,0 +1,58 @@
name: sherpa_onnx
description: >
Speech recognition, speech synthesis, and speaker recognition using next-gen Kaldi
with onnxruntime, without requiring an Internet connection.
repository: https://github.com/k2-fsa/sherpa-onnx/tree/master/sherpa-onnx/flutter
issue_tracker: https://github.com/k2-fsa/sherpa-onnx/issues
documentation: https://k2-fsa.github.io/sherpa/onnx/
topics:
- speech-recognition
- speech-synthesis
- speaker-identification
- audio-tagging
- voice-activity-detection
# remember to change the version in ../sherpa_onnx_macos/macos/sherpa_onnx.podspec
version: 1.10.7
homepage: https://github.com/k2-fsa/sherpa-onnx
environment:
sdk: '>=3.4.0 <4.0.0'
flutter: '>=3.3.0'
dependencies:
ffi: ^2.1.0
flutter:
sdk: flutter
sherpa_onnx_android:
# path: ../sherpa_onnx_android
sherpa_onnx_macos:
# path: ../sherpa_onnx_macos
sherpa_onnx_linux:
# path: ../sherpa_onnx_linux
#
sherpa_onnx_windows:
# path: ../sherpa_onnx_windows
flutter:
plugin:
platforms:
android:
default_package: sherpa_onnx_android
macos:
default_package: sherpa_onnx_macos
linux:
default_package: sherpa_onnx_linux
windows:
default_package: sherpa_onnx_windows

flutter/sherpa_onnx_android/.gitignore

@@ -0,0 +1,29 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
build/


@@ -0,0 +1,30 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.
version:
revision: "5dcb86f68f239346676ceb1ed1ea385bd215fba1"
channel: "stable"
project_type: plugin_ffi
# Tracks metadata for the flutter migrate command
migration:
platforms:
- platform: root
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
- platform: android
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
# User provided section
# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
- 'lib/main.dart'
- 'ios/Runner.xcodeproj/project.pbxproj'


@@ -0,0 +1,7 @@
# sherpa_onnx_android
This is a sub project of [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx).
You are not expected to use this package directly.
Please see the entry point at <https://pub.dev/packages/sherpa_onnx>.


@@ -0,0 +1,4 @@
include: package:flutter_lints/flutter.yaml
# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options


@@ -0,0 +1,9 @@
*.iml
.gradle
/local.properties
/.idea/workspace.xml
/.idea/libraries
.DS_Store
/build
/captures
.cxx


@@ -0,0 +1,48 @@
// The Android Gradle Plugin builds the native code with the Android NDK.
group = "com.k2fsa.sherpa.onnx.sherpa_onnx_android"
version = "1.0"
buildscript {
repositories {
google()
mavenCentral()
}
dependencies {
// The Android Gradle Plugin knows how to build native code with the NDK.
classpath("com.android.tools.build:gradle:7.3.0")
}
}
rootProject.allprojects {
repositories {
google()
mavenCentral()
}
}
apply plugin: "com.android.library"
android {
namespace 'com.k2fsa.sherpa.onnx'
// Bumping the plugin compileSdk version requires all clients of this plugin
// to bump the version in their app.
compileSdk = 34
// Use the NDK version
// declared in /android/app/build.gradle file of the Flutter project.
// Replace it with a version number if this plugin requires a specific NDK version.
// (e.g. ndkVersion "23.1.7779620")
ndkVersion = android.ndkVersion
compileOptions {
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8
}
defaultConfig {
minSdk = 21
}
}


@@ -0,0 +1 @@
rootProject.name = 'sherpa_onnx_android'


@@ -0,0 +1,3 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.k2fsa.sherpa.onnx">
</manifest>


@@ -0,0 +1,23 @@
# Introduction
Pre-built libraries are not checked in.
Please use
- https://github.com/k2-fsa/sherpa-onnx/blob/master/build-android-arm64-v8a.sh
- https://github.com/k2-fsa/sherpa-onnx/blob/master/build-android-armv7-eabi.sh
- https://github.com/k2-fsa/sherpa-onnx/blob/master/build-android-x86-64.sh
- https://github.com/k2-fsa/sherpa-onnx/blob/master/build-android-x86.sh
The following is an example for `arm64-v8a`:
```bash
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
export SHERPA_ONNX_ENABLE_JNI=OFF
export SHERPA_ONNX_ENABLE_C_API=ON
./build-android-arm64-v8a.sh
cp -v build-android-arm64-v8a/install/lib/*.so flutter/sherpa_onnx_android/android/src/main/jniLibs/arm64-v8a/
```


@@ -0,0 +1,43 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
.pub-cache/
.pub/
/build/
# Symbolication related
app.*.symbols
# Obfuscation related
app.*.map.json
# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release


@@ -0,0 +1,9 @@
# Introduction
Please find examples at
https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples
and
https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples


@@ -0,0 +1,18 @@
# sherpa-onnx app example
## Streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/streaming-asr
## Non-streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/non-streaming-asr
## Text to speech (TTS)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/tts
## Voice activity detection (VAD)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/vad


@@ -0,0 +1,18 @@
name: sherpa_onnx_android
description: "A new Flutter FFI plugin project."
version: 0.0.1
homepage:
environment:
sdk: '>=3.4.0 <4.0.0'
flutter: '>=3.3.0'
dependencies:
flutter:
sdk: flutter
flutter:
plugin:
platforms:
android:
ffiPlugin: true

flutter/sherpa_onnx_linux/.gitignore

@@ -0,0 +1,29 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
build/


@@ -0,0 +1,30 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.
version:
revision: "5dcb86f68f239346676ceb1ed1ea385bd215fba1"
channel: "stable"
project_type: plugin_ffi
# Tracks metadata for the flutter migrate command
migration:
platforms:
- platform: root
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
- platform: linux
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
# User provided section
# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
- 'lib/main.dart'
- 'ios/Runner.xcodeproj/project.pbxproj'


@@ -0,0 +1,7 @@
# sherpa_onnx_linux
This is a sub project of [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx).
You are not expected to use this package directly.
Please see the entry point at <https://pub.dev/packages/sherpa_onnx>.


@@ -0,0 +1,4 @@
include: package:flutter_lints/flutter.yaml
# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options


@@ -0,0 +1,43 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
.pub-cache/
.pub/
/build/
# Symbolication related
app.*.symbols
# Obfuscation related
app.*.map.json
# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release


@@ -0,0 +1,9 @@
# Introduction
Please find examples at
https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples
and
https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples


@@ -0,0 +1,18 @@
# sherpa-onnx app example
## Streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/streaming-asr
## Non-streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/non-streaming-asr
## Text to speech (TTS)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/tts
## Voice activity detection (VAD)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/vad


@@ -0,0 +1,27 @@
# The Flutter tooling requires that developers have CMake 3.10 or later
# installed. You should not increase this version, as doing so will cause
# the plugin to fail to compile for some customers of the plugin.
cmake_minimum_required(VERSION 3.10)
# Project-level configuration.
set(PROJECT_NAME "sherpa_onnx_linux")
project(${PROJECT_NAME} LANGUAGES CXX)
# List of absolute paths to libraries that should be bundled with the plugin.
# This list could contain prebuilt libraries, or libraries created by an
# external build triggered from this build file.
set(sherpa_onnx_linux_bundled_libraries
"${CMAKE_CURRENT_SOURCE_DIR}/libsherpa-onnx-c-api.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libsherpa-onnx-core.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libkaldi-decoder-core.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libsherpa-onnx-kaldifst-core.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libsherpa-onnx-fstfar.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libsherpa-onnx-fst.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libkaldi-native-fbank-core.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libpiper_phonemize.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libespeak-ng.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libucd.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libonnxruntime.so"
"${CMAKE_CURRENT_SOURCE_DIR}/libssentencepiece_core.so"
PARENT_SCOPE
)


@@ -0,0 +1,5 @@
# Introduction
`*.so` files are generated by GitHub Actions when a new release is made.
We don't check pre-built library files into git.


@@ -0,0 +1,18 @@
name: sherpa_onnx_linux
description: "A new Flutter FFI plugin project."
version: 0.0.1
homepage:
environment:
sdk: '>=3.4.0 <4.0.0'
flutter: '>=3.3.0'
dependencies:
flutter:
sdk: flutter
flutter:
plugin:
platforms:
linux:
ffiPlugin: true

flutter/sherpa_onnx_macos/.gitignore

@@ -0,0 +1,29 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
build/


@@ -0,0 +1,30 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.
version:
revision: "5dcb86f68f239346676ceb1ed1ea385bd215fba1"
channel: "stable"
project_type: plugin_ffi
# Tracks metadata for the flutter migrate command
migration:
platforms:
- platform: root
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
- platform: macos
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
# User provided section
# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
- 'lib/main.dart'
- 'ios/Runner.xcodeproj/project.pbxproj'


@@ -0,0 +1,7 @@
# sherpa_onnx_macos
This is a sub project of [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx).
You are not expected to use this package directly.
Please see the entry point at <https://pub.dev/packages/sherpa_onnx>.


@@ -0,0 +1,4 @@
include: package:flutter_lints/flutter.yaml
# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options


@@ -0,0 +1,9 @@
# Introduction
Please find examples at
https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples
and
https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples


@@ -0,0 +1,18 @@
# sherpa-onnx app example
## Streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/streaming-asr
## Non-streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/non-streaming-asr
## Text to speech (TTS)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/tts
## Voice activity detection (VAD)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/vad


@@ -0,0 +1,5 @@
# Introduction
`*.dylib` files are generated by GitHub Actions when a new release is made.
We don't check pre-built library files into git.


@@ -0,0 +1,27 @@
#
# To learn more about a Podspec see http://guides.cocoapods.org/syntax/podspec.html.
# Run `pod lib lint sherpa_onnx_macos.podspec` to validate before publishing.
#
Pod::Spec.new do |s|
s.name = 'sherpa_onnx_macos'
s.version = '1.10.6'
s.summary = 'sherpa-onnx Flutter FFI plugin project.'
s.description = <<-DESC
sherpa-onnx Flutter FFI plugin project.
DESC
s.homepage = 'https://github.com/k2-fsa/sherpa-onnx'
s.license = { :file => '../LICENSE' }
s.author = { 'Fangjun Kuang' => 'csukuangfj@gmail.com' }
# This will ensure the source files in Classes/ are included in the native
# builds of apps using this FFI plugin. Podspec does not support relative
# paths, so Classes contains a forwarder C file that relatively imports
# `../src/*` so that the C sources can be shared among all target platforms.
s.source = { :path => '.' }
s.dependency 'FlutterMacOS'
s.vendored_libraries = '*.dylib'
s.platform = :osx, '10.11'
s.pod_target_xcconfig = { 'DEFINES_MODULE' => 'YES' }
s.swift_version = '5.0'
end


@@ -0,0 +1,18 @@
name: sherpa_onnx_macos
description: "A new Flutter FFI plugin project."
version: 0.0.1
homepage:
environment:
sdk: '>=3.4.0 <4.0.0'
flutter: '>=3.3.0'
dependencies:
flutter:
sdk: flutter
flutter:
plugin:
platforms:
macos:
ffiPlugin: true

flutter/sherpa_onnx_windows/.gitignore

@@ -0,0 +1,29 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
build/


@@ -0,0 +1,30 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.
version:
revision: "5dcb86f68f239346676ceb1ed1ea385bd215fba1"
channel: "stable"
project_type: plugin_ffi
# Tracks metadata for the flutter migrate command
migration:
platforms:
- platform: root
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
- platform: windows
create_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
base_revision: 5dcb86f68f239346676ceb1ed1ea385bd215fba1
# User provided section
# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
- 'lib/main.dart'
- 'ios/Runner.xcodeproj/project.pbxproj'


@@ -0,0 +1,7 @@
# sherpa_onnx_linux
This is a sub project of [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx).
You are not expected to use this package directly.
Please see the entry point at <https://pub.dev/packages/sherpa_onnx>.


@@ -0,0 +1,4 @@
include: package:flutter_lints/flutter.yaml
# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options


@@ -0,0 +1,43 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/
# IntelliJ related
*.iml
*.ipr
*.iws
.idea/
# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/
# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
.pub-cache/
.pub/
/build/
# Symbolication related
app.*.symbols
# Obfuscation related
app.*.map.json
# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release


@@ -0,0 +1,9 @@
# Introduction
Please find examples at
https://github.com/k2-fsa/sherpa-onnx/tree/master/flutter-examples
and
https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples


@@ -0,0 +1,18 @@
# sherpa-onnx app example
## Streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/streaming-asr
## Non-streaming speech recognition
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/non-streaming-asr
## Text to speech (TTS)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/tts
## Voice activity detection (VAD)
Please see https://github.com/k2-fsa/sherpa-onnx/tree/master/dart-api-examples/vad



@@ -0,0 +1,18 @@
name: sherpa_onnx_windows
description: "A new Flutter FFI plugin project."
version: 0.0.1
homepage:
environment:
sdk: '>=3.4.0 <4.0.0'
flutter: '>=3.3.0'
dependencies:
flutter:
sdk: flutter
flutter:
plugin:
platforms:
windows:
ffiPlugin: true


@@ -0,0 +1,17 @@
flutter/
# Visual Studio user-specific files.
*.suo
*.user
*.userosscache
*.sln.docstates
# Visual Studio build-related files.
x64/
x86/
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/


@@ -0,0 +1,28 @@
# The Flutter tooling requires that developers have a version of Visual Studio
# installed that includes CMake 3.14 or later. You should not increase this
# version, as doing so will cause the plugin to fail to compile for some
# customers of the plugin.
cmake_minimum_required(VERSION 3.14)
# Project-level configuration.
set(PROJECT_NAME "sherpa_onnx_windows")
project(${PROJECT_NAME} LANGUAGES CXX)
# List of absolute paths to libraries that should be bundled with the plugin.
# This list could contain prebuilt libraries, or libraries created by an
# external build triggered from this build file.
set(sherpa_onnx_windows_bundled_libraries
"${CMAKE_CURRENT_SOURCE_DIR}/sherpa-onnx-c-api.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/sherpa-onnx-core.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/kaldi-decoder-core.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/sherpa-onnx-kaldifst-core.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/sherpa-onnx-fstfar.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/sherpa-onnx-fst.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/kaldi-native-fbank-core.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/piper_phonemize.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/espeak-ng.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/ucd.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/onnxruntime.dll"
"${CMAKE_CURRENT_SOURCE_DIR}/ssentencepiece_core.dll"
PARENT_SCOPE
)
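The CMake 3.14 floor stated at the top of this file can be checked by hand before building. A minimal sketch of the version comparison, parsing a sample `cmake --version` first line (the `sample` string is hypothetical; substitute the real output from your Visual Studio toolchain):

```shell
# Hypothetical sample; in practice use: sample=$(cmake --version | head -n1)
sample='cmake version 3.27.4'
ver=${sample##* }      # strip everything up to the last space -> "3.27.4"
major=${ver%%.*}       # "3"
rest=${ver#*.}
minor=${rest%%.*}      # "27"
# Accept any 3.14+ (or any 4.x and later) CMake.
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 14 ]; }; then
  ok=yes
else
  ok=no
fi
echo "$ok"
```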


@@ -0,0 +1,5 @@
# Introduction
`*.dll` files are generated by GitHub Actions when a new release is made.
We do not check pre-built library files into git.