Writing an FFL-enabled Android application

Prerequisites

You should have a good working knowledge of Android tools and programming, and have the Android development tools (e.g. Android Studio and the SDK tools) properly installed on your system. If you are new to Android, this is not the best hello-world sample; start here: https://developer.android.com.

We recommend using the latest Android Studio; you can download it from https://developer.android.com.

The minimum target Android API level for the FFL API is 24, i.e. Android 7.0 or newer. While it might work there, we recommend Android 9 or newer, as version 7 is known to have many implementation flaws that are most often fixed only by firmware updates.
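This requirement is declared with the usual minSdkVersion setting in app/build.gradle. A minimal sketch; the values other than 24 are illustrative, not prescribed by FFL:

```groovy
android {
    defaultConfig {
        // Android 7.0, the minimum supported by the FFL API
        minSdkVersion 24
        // illustrative; use whatever your project targets
        targetSdkVersion 30
    }
}
```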

Install the sample application

Now, having everything ready for Android development, get the sample Android application, the SampleFFLApplication project from NTech, unpack/copy it to a separate folder and open it with Android Studio, or build it directly from the command line using Gradle. You should know how to do this; otherwise, check the prerequisites above.

The sample application is filled with informative and contextual comments and source docs, so we recommend finding some time to read the application sources and the generated docs for the components referenced here.

Here we explain some details of how to write a new Android application that uses FFL services, based on the sample application.

Static libraries

Be sure to put a copy of the static libraries for Java and Android into the `app/libs` folder (create it if missing); it should contain at least the following:

  • ffl_android-release.aar - Android-specific API
  • libfflapi.jar - core FFL API, portable (Android/desktop/server)

You can copy these from the sample application, or otherwise obtain the latest versions from NTech.

build.gradle

These libraries should be listed as dependencies in app/build.gradle:

dependencies {

    // ...

    // Android GUI tools for FFL:
    implementation files('libs/ffl_android-release.aar')
    // FFL API:
    implementation files('libs/libfflapi.jar')
    // we need serialization for libfflapi:
    implementation "org.jetbrains.kotlinx:kotlinx-serialization-json:1.1.0"
}

Android permissions

Be sure to require the internet permission in the Android manifest file (src/main/AndroidManifest.xml in our case):

<uses-permission android:name="android.permission.INTERNET" />

Also, if you plan to use the non-secure HTTP transport to access the FFL service, specify it in the application tag: android:usesCleartextTraffic="true".
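For example, a sketch of the attribute placement only; a real manifest will carry more attributes and the application content:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <application android:usesCleartextTraffic="true">
        <!-- activities etc. -->
    </application>
</manifest>
```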

The camera requires a feature request in the manifest file:

<uses-feature
    android:name="android.hardware.camera"
    android:required="true" />

<uses-permission android:name="android.permission.CAMERA" />

Then you need the Manifest.permission.CAMERA permission to be requested and checked by the application before any use of the FFL API for Android, so we recommend putting it in the application's main activity; see the main activity in app/src/main/java/MainActivity.kt: as soon as the application is initialized, it requests the camera permission and opens the home activity (in this case, IdentificationActivity.kt).
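The permission flow itself uses the standard Android runtime-permission API. A minimal sketch, assuming a request-code constant and an openHome() helper of our own; the sample's MainActivity.kt remains the authoritative version:

```kotlin
import android.Manifest
import android.content.Intent
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Ask for the camera permission before any FFL API for Android usage
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED
        ) openHome()
        else ActivityCompat.requestPermissions(
            this, arrayOf(Manifest.permission.CAMERA), REQUEST_CAMERA
        )
    }

    override fun onRequestPermissionsResult(
        requestCode: Int, permissions: Array<out String>, grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == REQUEST_CAMERA &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) openHome()
        // else: explain to the user why the camera is required
    }

    private fun openHome() =
        startActivity(Intent(this, IdentificationActivity::class.java))

    companion object { private const val REQUEST_CAMERA = 1 }
}
```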

Configuring access to your FFL server

To access FFL services, you'll need the server URI and an account, i.e. a login and password. Then the API object, which provides access to the FFL functions, can be constructed. For clarity we put this code into a separate file, src/main/java/net/sergeych/samplefflapplication/Service.kt, which only creates the API instance in a separate thread (Android doesn't allow synchronous network access from the UI thread). The api object is constructed by the following code:

val api = FFLApi(
    // use some other storage to save session token in real life
    MemoryKVStorage(),
    serverUrlString,
    fflServerLogin,
    fflServerPassword
)

The api instance will then be used to perform all other operations.

Note that MemoryKVStorage() stores service API tokens in RAM only; you can use AndroidKVStorage instead, which automatically saves and reuses FFL service login tokens, speeding up FFL connection establishment.

Take note of the CompletableFuture approach to getting the API instance; it also simplifies access to it from the UI thread.
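A sketch of such a service object follows; the URL and credentials are placeholders, and the sample's Service.kt is the real implementation:

```kotlin
import java.util.concurrent.CompletableFuture

object Service {
    // Construct the API off the UI thread: Android forbids
    // synchronous network access from it.
    private val apiFuture: CompletableFuture<FFLApi> =
        CompletableFuture.supplyAsync {
            FFLApi(
                MemoryKVStorage(),         // or AndroidKVStorage in real apps
                "https://ffl.example.com", // placeholder server URL
                "fflLogin",                // placeholder account login
                "fflPassword"              // placeholder account password
            )
        }

    // Blocks until the API is ready, so call it from a background thread only.
    val api: FFLApi get() = apiFuture.get()
}
```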

See the full documentation for the FFL API

The [FFLApi] can work either with the FFL server directly or with a transparent proxy, as long as it completely mimics the proxied server's behavior, i.e. is 100% API-compatible.

Onboarding

The onboarding procedure registers a new user record that includes:

  • image of the document with photo (like passport, driving license or ID card)
  • live selfie movie of the same person (same face), with liveness check
  • login and password to access the record.

The full and up-to-date documentation is available here.

Onboarding is performed in the following steps:

Create onboarding object instance

val onb = FFLOnboarding(api, 0.9)

The first argument is the API object instance described above. The second argument is the face match threshold, the parameter that tells the system how closely the photo on the ID document must resemble the one in the selfie movie. Reasonable values lie in the 0.75..0.9 range; consult the NTech server documentation for more information.

Add document photo

The document photo is a .jpg file; it must contain a clearly visible face image. To add a document photo:

var result = onb.addDocument(docJpegFile)

where docJpegFile is a string with a valid path to a jpeg file.

If everything is OK, the result should be FFLOnboarding.Result.OK. Other values represent various error conditions and are described here.

If everything is OK, it is possible to add the live selfie.

Add selfie video

Note that to add the video, you must first successfully add the document; without it, adding the video will fail. The selfie video is a short mp4 movie file, preferably obtained using the provided Android library; otherwise, please consult the FFL server documentation for the current selfie movie requirements. The movie should be recorded to a file to be processed. When everything is ready, add it to the onboarding instance:

result = onb.addSelfieMovie(mp4File)

where mp4File is a string containing the path to a valid mp4 video of the freshly captured face of the registering user. Note that the face must match the one registered in the document from the previous step.

Again, result should be FFLOnboarding.Result.OK. Other values represent various error conditions and are described here.

Create account (login and password)

When you have an onboarding instance with a successfully added document and selfie, it is possible to finalize the onboarding procedure by providing a login and password to create the user account. Note that it will not work if any of the above steps was skipped. To create the user record, call:

result = onb.createDossier(login, password)

Where login should be a unique string. If the login is already in use, it will return the [LoginExists](https://kb.universablockchain.com/system/static/fflapi/libfflapi/net.sergeych.libfflapi/-f-f-l-onboarding/-result/-login-exists/index.html) error, and this step should be repeated with some other login value.

If result == FFLOnboarding.Result.OK, then onboarding has been successfully completed.
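The three steps above can be sketched as a single flow; file paths and credentials here are placeholders:

```kotlin
// A sketch of the whole onboarding procedure, assuming placeholder inputs.
fun onboard(api: FFLApi): Boolean {
    val onb = FFLOnboarding(api, 0.9)

    // 1. The document photo with a clearly visible face
    if (onb.addDocument("/path/to/document.jpg") != FFLOnboarding.Result.OK)
        return false

    // 2. The live selfie movie (only accepted after the document)
    if (onb.addSelfieMovie("/path/to/selfie.mp4") != FFLOnboarding.Result.OK)
        return false

    // 3. Create the account; on LoginExists, retry with another login
    return onb.createDossier("someUniqueLogin", "somePassword") ==
            FFLOnboarding.Result.OK
}
```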

Identification

Identification is the procedure of finding a person based on their live selfie video. The person should first be registered with the system using the onboarding procedure described above. The selfie should be saved to a file, preferably with the provided Android activity described below. The current requirements for the selfie video depend on the server installation used with the library; consult your server documentation for them.

To perform identification:

val result = FFLIdentification(api).perform(selfieMp4File)

Where api is an instance of the FFLApi class described above, and selfieMp4File is the name of a valid mp4 video file stored locally.

The result value of type [LiveIdentificationResult] either contains the corresponding dossier or information about the error. The following code illustrates it:

when (val r = FFLIdentification(api).perform(selfieMp4File)) {
    is LISuccess -> {
        // found
        println("user found: ${r.dossier.name}")
    }
    is LIBadVideo -> {
        println("video is not good, record it again")
    }
    is LINotFound -> {
        println("video is OK, no matching face is found")
    }
    is LIFailed -> {
        println("unexpected error (for example, network failed)")
    }
}

Please consult the full documentation of the LiveIdentificationResult and FFLIdentification classes for the most up-to-date information.

Verification

Verification is very much like identification but requires not only the user's live video but also a login and password. To perform verification, first create an FFLVerification instance:

val vn = FFLVerification(api, 0.75)

Where api is an FFLApi class instance described above, and 0.75 is an image tolerance threshold representing how closely the face on the live video must match the one in the registered document; reasonable values are in the 0.75 .. 0.9 range. Please consult the FFL server documentation for more details on this threshold parameter.

When verification instance is ready, login to get the dossier by user's login and password:

val dossier = vn.login("foo", "bar") ?: run {
    println("Incorrect credentials")
    return "invalid_login"
}

Here foo and bar are the login and password of the user being verified. The login method returns null if the login and password combination is not valid.

After a successful login, the dossier object is available to perform application-specific checks. Next, the user's live selfie movie should be recorded and checked:

when (val r = vn.verify(selfieMp4File)) {
    is LIBadVideo -> {
        println("Bad video, error: ${r.code}, please try again")
        // record new video and try again
    }
    is LIFailed -> {
        println("unexpected error: ${r.message}")
        // and try again with the same instance
    }
    LINotFound -> {
        // this means the video does not match the logged-in profile documents:
        println("You aren't the person that owns that login")
        // report failure, record a new selfie if needed
    }
    is LISuccess -> {
        println("Your personality has been approved.")
        // successful verification: the person matches the registered ID documents
    }
}

where `selfieMp4File` is a string with a valid path to an mp4 file containing the user's live selfie video. Please see the identification section above for this file's requirements.

An unsuccessful verification can be retried with the same verification object instance.
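For example, a simple retry loop; recordNewSelfie() here is a hypothetical helper that records and returns the path of a fresh selfie movie:

```kotlin
// A sketch of retrying verification with the same FFLVerification instance.
fun verifyWithRetries(vn: FFLVerification, attempts: Int = 3): Boolean {
    repeat(attempts) {
        // recordNewSelfie() is hypothetical: capture and save a new selfie mp4
        when (vn.verify(recordNewSelfie())) {
            is LISuccess -> return true  // person confirmed
            LINotFound -> return false   // wrong person; retrying won't help
            is LIBadVideo, is LIFailed -> { /* record again and retry */ }
        }
    }
    return false
}
```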

Please also see the most recent and complete documentation of the FFLVerification class.

Other FFL service actions

The [FFLApi class documentation] contains the full documentation; please refer to it if you need something else.

FFL android sample: identification

This activity performs face-based identification, the procedure most complex to code, as it requires capturing video in the way the FFL service expects. Using the FFL API for Android, though, it is a rather simple task. Let's see it step by step. Everything is done in our activity, which inherits from the FFL for Android activity VideoActivity; it is important to derive your activity from it unless you want to capture video and draw the viewfinder manually:

class IdentificationActivity : VideoActivity() {

    // ...

Then, properly initialize VideoActivity. All it needs to know is which view you want to put the viewfinder video into. We provide a special view type for this, AutoFitTextureView; it contains all the code necessary to cope with camera and view dimensions and aspect ratio, providing a viewfinder that works as expected. Here is a sample of how to insert this view into the activity layout:

 <net.sergeych.ffl_android.AutoFitTextureView
        android:id="@+id/viewFinder"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

Note the view class here and the constraints. We recommend always using it this way: set its height and width to wrap_content, and fix it in the proper place with layout_constraint... attributes. In the sample app, see app/src/main/res/layout/activity_identification.xml.

Having the view properly placed, just tell VideoActivity what to use to show the viewfinder video:

    override fun onCreate(savedInstanceState: Bundle?) {
        // ...

        // Before anything else, set parent's mTextureView to the
        // net.sergeych.ffl_android.AutoFitTextureView supplied in the library.
        // note that for correct work it should have width and height set
        // to wrap_content! (see activity_identification.xml)
        mTextureView = binding.viewFinder

        //...
    }

This sample uses Android's binding architecture, but you can use whatever technique you like, such as findViewById() or good old direct Kotlin bindings.

You can find more information on the VideoActivity class in [its online docs](https://kb.universablockchain.com/system/static/ffl_android/ffl_android/net.sergeych.ffl_android/-video-activity/index.html).

The rest of the initialization function just adds the button handler that does all the FFL magic, in this case live face-based identification, in the doIdentification() method:

    fun doIdentification() {
        binding.bnRecord.isEnabled = false
        showMessage("recording...")
        recordSelfieVideo(true) { file ->
            showMessage("analysing...")
            // now we are in the UI thread having a recorded video file, we should start
            // identification process in the background thread, as it is long and does network
            // operation:
            inBackground {
                try {
                    val identification = FFLIdentification(Service.api)
                    identification.perform(file)
                }
                catch(x: Exception) {
                    // exception here means something really weird, like the network disconnecting
                    // before an API connection is established, so we just return a proper result
                    LIFailed(x.toString())
                }
            } ui {
                // back in UI thread to show results:
                val message = when(it) {
                    is LIBadVideo -> "Video is not good: ${it.code}"
                    LINotFound -> "Your face has not been found"
                    is LIFailed -> "Failure: ${it.message}"
                    is LISuccess -> "Found: ${it.dossier.name}"
                }
                showMessage(message)
                binding.bnRecord.isEnabled = true
            }
        }
    }

Let's see what it does in more detail:

recordSelfieVideo(true): records a short video of the live user's face using the previously configured viewfinder view. The parameter true means that the activity should not freeze the viewfinder after recording completes. This behavior, though otherwise good and expected by the user, can cause some Xiaomi smartphones to freeze because of a bug in their camera driver code.

See VideoActivity documentation for more data on how else you can get the selfie movie.

When the selfie movie is recorded, it creates FFLIdentification instance and performs identification:

inBackground {
    try {
        val identification = FFLIdentification(Service.api)
        identification.perform(file)
    }
    catch(x: Exception) {
        // exception here means something really weird, like the network disconnecting
        // before an API connection is established, so we just return a proper result
        LIFailed(x.toString())
    }
} ui {
    // back in UI thread to show results:
    val message = when(it) {
        is LIBadVideo -> "Video is not good: ${it.code}"
        LINotFound -> "Your face has not been found"
        is LIFailed -> "Failure: ${it.message}"
        is LISuccess -> "Found: ${it.dossier.name}"
    }
}

Note the usage of the inBackground {} ui {} helper; it is part of the Android API library and greatly simplifies moving between background threads and the main UI thread. It executes a block of slow and/or blocking operations, here the live face identification, in the background, then passes the result to the ui {} block executed in the UI thread. See the online documentation for more.

Now, in the ui {} block, we check the result of the identification performed and compose the right message to show to the user.

Online docs

Information about the API libraries is available here: