ml: Implement model-specific metric reports to UMA

In the previous code, the metrics for each type of system resource
usage (RAM, CPU time, etc.) were aggregated over all the models, so
one could not see the resource usage of a particular model. In this
CL, we make such metrics, and their reports to UMA, model-specific.

For example, assume there are two models whose |metrics_model_name|
is "SmartDimModel" and "TestModel", respectively. When logging the CPU
usage of model loading, the previous code generated only one histogram
on UMA:

    MachineLearningService.LoadModelResult.CpuTimeMicrosec

which contained the metrics of both "SmartDimModel" and "TestModel".
Now there are two histograms,

 MachineLearningService.SmartDimModel.LoadModelResult.CpuTimeMicrosec
 MachineLearningService.TestModel.LoadModelResult.CpuTimeMicrosec

which record the metrics for each model separately.
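
A minimal sketch of the naming rule these examples imply, assuming the
name is a plain string concatenation (the actual composition is done by
the RequestMetrics helper seen in the diff below and may differ in
detail):

    #include <string>

    // Hypothetical helper: "MachineLearningService." + model name + "." +
    // request name + "." + metric name.
    std::string ComposeHistogramName(const std::string& model_name,
                                     const std::string& request_name,
                                     const std::string& metric_name) {
      return "MachineLearningService." + model_name + "." + request_name +
             "." + metric_name;
    }

    // ComposeHistogramName("TestModel", "LoadModelResult", "CpuTimeMicrosec")
    // == "MachineLearningService.TestModel.LoadModelResult.CpuTimeMicrosec"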

This is achieved by requiring model owners to specify a
metrics_model_name in their ModelMetadata.
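
As an illustration, a metadata entry might then look like the following
sketch (the metrics_model_name requirement comes from this CL; the
struct layout and the other field names are assumptions made here for
illustration only):

    #include <string>

    struct ModelMetadata {             // simplified stand-in, not the real type
      std::string model_file;          // assumed field
      std::string metrics_model_name;  // the field this CL requires
    };

    const ModelMetadata kTestModelMetadata = {
        "test_model.tflite",  // hypothetical file name
        "TestModel",          // becomes the model component of histogram names
    };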

BUG=chromium:924709
TEST=test on device
TEST=histogram names of TestModel are correct
TEST=histogram name of model specification error is correct

Change-Id: Id55b9d0385ec80e0606fa4d663dbd2bcc9c24cf3
Reviewed-on: https://chromium-review.googlesource.com/1710251
Tested-by: Honglin Yu <honglinyu@chromium.org>
Commit-Ready: Honglin Yu <honglinyu@chromium.org>
Legacy-Commit-Queue: Commit Bot <commit-bot@chromium.org>
Reviewed-by: Andrew Moylan <amoylan@chromium.org>
diff --git a/ml/graph_executor_impl.cc b/ml/graph_executor_impl.cc
index 0f5fc5c..e1834ea 100644
--- a/ml/graph_executor_impl.cc
+++ b/ml/graph_executor_impl.cc
@@ -24,7 +24,7 @@
 using ::chromeos::machine_learning::mojom::ValueList;
 
 // Base name for UMA metrics related to graph execution
-constexpr char kMetricsNameBase[] = "ExecuteResult";
+constexpr char kMetricsRequestName[] = "ExecuteResult";
 
 // Verifies |tensor| is valid (i.e. is of type |TensorType| and of the correct
 // shape for this input) and copies its data into the graph |interpreter| at
@@ -155,11 +155,13 @@
     const std::map<std::string, int>& required_inputs,
     const std::map<std::string, int>& required_outputs,
     std::unique_ptr<tflite::Interpreter> interpreter,
-    GraphExecutorRequest request)
+    GraphExecutorRequest request,
+    const std::string& metrics_model_name)
     : required_inputs_(required_inputs),
       required_outputs_(required_outputs),
       interpreter_(std::move(interpreter)),
-      binding_(this, std::move(request)) {}
+      binding_(this, std::move(request)),
+      metrics_model_name_(metrics_model_name) {}
 
 void GraphExecutorImpl::set_connection_error_handler(
     base::Closure connection_error_handler) {
@@ -170,8 +172,10 @@
     std::unordered_map<std::string, TensorPtr> tensors,
     const std::vector<std::string>& outputs,
     const ExecuteCallback& callback) {
+  DCHECK(!metrics_model_name_.empty());
 
-  RequestMetrics<ExecuteResult> request_metrics(kMetricsNameBase);
+  RequestMetrics<ExecuteResult> request_metrics(metrics_model_name_,
+                                                kMetricsRequestName);
   request_metrics.StartRecordingPerformanceMetrics();
 
   // Validate input and output names (before executing graph, for efficiency).
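
Not part of the excerpt above, but for context: a self-contained sketch
of the pattern these hunks introduce, in which the executor stores the
model-specific name and refuses to run with an empty one (the names and
types here are simplified stand-ins, not the real ml service API):

    #include <cassert>
    #include <iostream>
    #include <string>

    class GraphExecutorSketch {
     public:
      explicit GraphExecutorSketch(const std::string& metrics_model_name)
          : metrics_model_name_(metrics_model_name) {}

      // Mirrors the Execute() change: check the name is set, then record
      // request metrics under the model-specific histogram prefix.
      void Execute() const {
        assert(!metrics_model_name_.empty());
        const std::string histogram_base = "MachineLearningService." +
                                           metrics_model_name_ +
                                           ".ExecuteResult";
        std::cout << "would record under: " << histogram_base << ".*\n";
      }

     private:
      const std::string metrics_model_name_;
    };

    int main() {
      GraphExecutorSketch executor("TestModel");
      executor.Execute();
      return 0;
    }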