I am developing an app that needs to keep reading text aloud after the screen turns off. To achieve this, I put the text-to-speech (TTS) code in a foreground service, so that TTS can keep running when the screen is off.
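Roughly, the foreground-service part looks like this (a minimal sketch; the class name, channel id, and notification text are placeholders rather than my exact code):

import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.Build
import android.os.IBinder
import androidx.core.app.NotificationCompat

class SpeechForegroundService : Service() {

    override fun onCreate() {
        super.onCreate()
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            // A foreground service needs a notification channel on Android 8.0+.
            val channel = NotificationChannel(
                "tts_channel", "Text to speech", NotificationManager.IMPORTANCE_LOW
            )
            getSystemService(NotificationManager::class.java).createNotificationChannel(channel)
        }
        val notification = NotificationCompat.Builder(this, "tts_channel")
            .setContentTitle("Reading aloud")
            .setSmallIcon(android.R.drawable.ic_media_play)
            .build()
        // Promoting the service to the foreground is what lets it keep running with the screen off.
        startForeground(1, notification)
    }

    override fun onBind(intent: Intent?): IBinder? = null
}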

This worked well on my phone before. But after I upgraded the phone from Android 11 to Android 12, TTS stops working some time after the screen turns off, usually after a few minutes.

Normally, after TTS finishes speaking a sentence, it calls the UtteranceProgressListener's onDone method, and that is where I tell TTS to speak the next sentence. The reason TTS stops working is that, some time after the screen turns off, the onDone method stops being called. It does not stop immediately, but after a few minutes, sometimes longer, sometimes shorter.
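The onDone-driven flow is roughly like this (a minimal sketch; the sentence list, class, and utterance ids are placeholders rather than my exact code):

import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener

// Illustrative only: speak a list of sentences one at a time, advancing from onDone.
class SentenceReader(private val tts: TextToSpeech, private val sentences: List<String>) {
    private var index = 0

    fun start() {
        tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
            override fun onStart(utteranceId: String?) {}
            override fun onError(utteranceId: String?) {}
            override fun onDone(utteranceId: String?) {
                // onDone runs on a background thread; queue the next sentence from here.
                speakNext()
            }
        })
        speakNext()
    }

    private fun speakNext() {
        if (index < sentences.size) {
            tts.speak(sentences[index], TextToSpeech.QUEUE_FLUSH, null, "utt_$index")
            index++
        }
    }
}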

I suspect the battery optimization in the new Android version is causing this problem, but even after I turned off system battery optimization it still does not work. I also noticed that some similar apps have the same problem, while others do not. How can I fix this?
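For reference, the battery-optimization exemption can also be requested from code, roughly like this (a minimal sketch; it needs the REQUEST_IGNORE_BATTERY_OPTIMIZATIONS permission in the manifest, and whether this actually helps on Android 12 is exactly what I am unsure about):

import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.PowerManager
import android.provider.Settings

// Ask the user to exempt the app from battery optimizations.
// Requires <uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS"/>.
fun requestIgnoreBatteryOptimizations(context: Context) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    if (!pm.isIgnoringBatteryOptimizations(context.packageName)) {
        val intent = Intent(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS)
            .setData(Uri.parse("package:${context.packageName}"))
            .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(intent)
    }
}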

Recommended Answer

This code works on Android 12, even when the app is in the background:

class TTS : Service(), OnInitListener {

private var tts: TextToSpeech? = null
private lateinit var spokenText: String
private var isInit: Boolean = false

override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    // Null-safe default; getStringExtra("text").toString() would turn a missing extra into the literal "null".
    spokenText = intent?.getStringExtra("text") ?: ""
    Log.d(TAG, "onStartCommand: $spokenText")
    return START_NOT_STICKY
}

override fun onCreate() {
    tts = TextToSpeech(this, this)
    Log.d(TAG, "onCreate: CREATING AGAIN !!")
}

override fun onInit(status: Int) {
    if (status == TextToSpeech.SUCCESS) {
        Log.d(TAG, "onInit: TextToSpeech Success")
        val result = tts!!.setLanguage(Locale("hi", "IN"))
        if (result != TextToSpeech.LANG_MISSING_DATA && result != TextToSpeech.LANG_NOT_SUPPORTED) {
            Log.d(TAG, "onInit: speaking........")
            addAudioAttributes()
            isInit = true
        }
    }
    else {
        Log.d(TAG, "onInit: TTS initialization failed")
        Toast.makeText(
            applicationContext,
            "Your device don't support text to speech.\n Visit app to download!!",
            Toast.LENGTH_SHORT
        ).show()
    }
}

private fun addAudioAttributes() {
    val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        val audioAttributes = AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
            .build()
        tts?.setAudioAttributes(audioAttributes)
    }

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val focusRequest =
            AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK)
                .setAudioAttributes(
                    AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                        .build()
                )
                .setAcceptsDelayedFocusGain(true)
                .setOnAudioFocusChangeListener { focus ->
                    when (focus) {
                        AudioManager.AUDIOFOCUS_GAIN -> {
                        }
                        else -> stopSelf()
                    }
                }.build()

        when (audioManager.requestAudioFocus(focusRequest)) {
            AudioManager.AUDIOFOCUS_REQUEST_GRANTED -> speak(audioManager, focusRequest)
            AudioManager.AUDIOFOCUS_REQUEST_DELAYED -> stopSelf()
            AudioManager.AUDIOFOCUS_REQUEST_FAILED -> stopSelf()
        }

    } else {
        val result = audioManager.requestAudioFocus( { focusChange: Int ->
            when(focusChange) {
                AudioManager.AUDIOFOCUS_GAIN -> {
                }
                else -> stopSelf()
            }
        },
            AudioManager.STREAM_MUSIC,
            AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK
        )

        if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
            speak(audioManager, null)
        }
    }
}

private fun speak(audioManager: AudioManager, focusRequest: AudioFocusRequest?) {
    val speechListener = object : UtteranceProgressListener() {
        override fun onStart(utteranceId: String?) {
            Log.d(TAG, "onStart: Started syntheses.....")
        }

        override fun onDone(utteranceId: String?) {
            Log.d(TAG, "onDone: Completed synthesis ")
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O && focusRequest != null) {
                audioManager.abandonAudioFocusRequest(focusRequest)
            }
            stopSelf()
        }

        override fun onError(utteranceId: String?) {
            Log.d(TAG, "onError: Error synthesis")
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O && focusRequest != null) {
                audioManager.abandonAudioFocusRequest(focusRequest)
            }
            stopSelf()
        }
    }
    val paramsMap: HashMap<String, String> = HashMap()
    paramsMap[TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID] = "tts_service"

    // Register the progress listener before queueing speech so onDone/onError are not missed.
    tts?.setOnUtteranceProgressListener(speechListener)
    tts?.speak(spokenText, TextToSpeech.QUEUE_ADD, paramsMap)
}

override fun onDestroy() {
    if (tts != null) {
        Log.d(TAG, "onDestroy: destroyed tts")
        tts?.stop()
        tts?.shutdown()
    }
    super.onDestroy()
}

override fun onBind(arg0: Intent?): IBinder? {
    return null
}

companion object {
    private const val TAG = "TTS_Service"
}

}
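For completeness, the service above would be started with the sentence to read passed in the "text" extra, roughly like this (the caller below is illustrative and not part of the answer). Note that the class as posted never calls startForeground, so if it is launched with startForegroundService on Android 8.0+ it must promote itself to the foreground within a few seconds or the system will stop it and report an error.

import android.content.Context
import android.content.Intent

// Illustrative caller: pass the sentence to read in the "text" extra.
fun startTtsService(context: Context, sentence: String) {
    val intent = Intent(context, TTS::class.java).putExtra("text", sentence)
    context.startService(intent)
}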
