Add a new Archiving stage to the scheduler, which runs after Parsing.  This stage is responsible for copying results to the results server in a drone setup -- a task currently performed directly by the scheduler -- and allows for site-specific archiving functionality, replacing the site_parse functionality.  It does this by running autoserv with a special control file (scheduler/archive_results.control.srv), which loads and runs code from the new scheduler.archive_results module.  The implementation was mostly straightforward, as the archiving stage is fully analogous to the parsing stage.  I did make a couple of refactorings (illustrative sketches follow the list):
* factored out the parser throttling code into a common superclass that the ArchiveResultsTask could share
* added some generic flags to autoserv to duplicate special-case functionality we'd added for the --collect-crashinfo option -- namely, specifying a different pidfile name and specifying that autoserv should allow (and even expect) an existing results directory.  In the future, I think it'd be more elegant to make crashinfo collection run using a special control file (as archiving does), rather than a hard-coded command-line option.
* moved the call to server_job.init_parser() out of the constructor, since it was an easy source of exceptions that wouldn't get logged.
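
For context, here is a minimal sketch of the control-file hand-off described above.  autoserv executes control files with a job object in its namespace; the scheduler/archive_results.control.srv path and the scheduler.archive_results module are from this change, but the import prefix and the archive() entry point below are assumptions, not the actual code:

    # scheduler/archive_results.control.srv -- sketch only
    from autotest_lib.scheduler import archive_results

    # All the real work (copying results to the results server, plus any
    # site-specific archiving hooks) lives in the module; the control file
    # is just a one-line dispatch.  archive() is a hypothetical entry point.
    archive_results.archive(job)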
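A rough sketch of the throttling superclass refactoring, assuming a per-task-type counter; all names and the limit here are illustrative rather than the actual scheduler code:

    class ThrottledPostJobTask(object):
        """Shared base: cap how many processes of one task type run at once."""
        _num_running = 0
        MAX_RUNNING = 5  # the real limit would come from scheduler config

        def poll(self):
            # Launch only when under the throttle; otherwise the task stays
            # queued and is retried on a later scheduler tick.
            cls = type(self)
            if cls._num_running < cls.MAX_RUNNING:
                cls._num_running += 1
                self._launch_process()

        def finish(self):
            type(self)._num_running -= 1

        def _launch_process(self):
            raise NotImplementedError  # subclasses invoke autoserv here

    class ParseTask(ThrottledPostJobTask):
        def _launch_process(self):
            pass  # would run the TKO parser

    class ArchiveResultsTask(ThrottledPostJobTask):
        def _launch_process(self):
            pass  # would run autoserv with archive_results.control.srv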
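And a sketch of the generic autoserv flags, using optparse in the style of autoserv's option parsing; the option names here are assumptions standing in for whatever the real flags are called:

    import optparse

    parser = optparse.OptionParser()
    # Hypothetical name: lets parsing/archiving runs against the same
    # results directory each write their own pidfile rather than colliding.
    parser.add_option('--pidfile-label', default='autoserv',
                      help='suffix used to name this run\'s pidfile')
    # Post-job stages operate on a results directory autoserv would
    # normally refuse to reuse; this flag allows (and expects) it.
    parser.add_option('--use-existing-results', action='store_true',
                      default=False,
                      help='run against an existing results directory')

    options, args = parser.parse_args(
        ['--pidfile-label', 'archiver', '--use-existing-results'])
    print('%s %s' % (options.pidfile_label, options.use_existing_results))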

Note: I believe some of the functional test changes slipped into my previous change, which is why that part of this diff looks smaller than you'd expect.

Signed-off-by: Steve Howard <showard@google.com>

==== (deleted) //depot/google_vendor_src_branch/autotest/tko/site_parse.py ====


git-svn-id: http://test.kernel.org/svn/autotest/trunk@4070 592f7852-d20e-0410-864c-8624ca9c26a4
diff --git a/server/server_job.py b/server/server_job.py
index 2b8ef6e..97a6b6d 100755
--- a/server/server_job.py
+++ b/server/server_job.py
@@ -136,11 +136,7 @@
             utils.write_keyval(self.resultdir, job_data)
 
         self._parse_job = parse_job
-        if self._parse_job and len(machines) == 1:
-            self._using_parser = True
-            self.init_parser(self.resultdir)
-        else:
-            self._using_parser = False
+        self._using_parser = (self._parse_job and len(machines) == 1)
         self.pkgmgr = packages.PackageManager(
             self.autodir, run_function_dargs={'timeout':600})
         self.num_tests_run = 0
@@ -207,20 +203,22 @@
         subcommand.subcommand.register_join_hook(on_join)
 
 
-    def init_parser(self, resultdir):
+    def init_parser(self):
         """
-        Start the continuous parsing of resultdir. This sets up
+        Start the continuous parsing of self.resultdir. This sets up
         the database connection and inserts the basic job object into
         the database if necessary.
         """
+        if not self._using_parser:
+            return
         # redirect parser debugging to .parse.log
-        parse_log = os.path.join(resultdir, '.parse.log')
+        parse_log = os.path.join(self.resultdir, '.parse.log')
         parse_log = open(parse_log, 'w', 0)
         tko_utils.redirect_parser_debugging(parse_log)
         # create a job model object and set up the db
         self.results_db = tko_db.db(autocommit=True)
         self.parser = status_lib.parser(self._STATUS_VERSION)
-        self.job_model = self.parser.make_job(resultdir)
+        self.job_model = self.parser.make_job(self.resultdir)
         self.parser.start(self.job_model)
         # check if a job already exists in the db and insert it if
         # it does not
@@ -317,7 +315,7 @@
                 os.chdir(self.resultdir)
                 self.in_machine_dir = True
                 utils.write_keyval(self.resultdir, {"hostname": machine})
-                self.init_parser(self.resultdir)
+                self.init_parser()
                 result = function(machine)
                 self.cleanup_parser()
                 return result