.. _cs-tf-workflow:
Workflow for TensorFlow on CS
=============================
When you target the Cerebras CS system with TensorFlow, start with the high-level workflow described here.
.. _port-short:
Port to Cerebras
----------------
Prepare your model and the input function by using ``CerebrasEstimator`` in place of the TensorFlow Estimator.
Running on the CS system requires ``CerebrasEstimator``; however, the same ``CerebrasEstimator`` code can also run on a CPU or GPU with minimal changes.
See :ref:`porting-tf-to-cs` to accomplish this step.
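Because ``CerebrasEstimator`` is designed as a drop-in replacement, the port usually amounts to swapping the estimator class while ``model_fn`` and ``input_fn`` keep their standard signatures. A minimal sketch is below; the ``cerebras.tf.cs_estimator`` import path is an assumption (see :ref:`porting-tf-to-cs` for the canonical one), and the guarded fallback only illustrates that the rest of the script stays the same on CPU/GPU.

```python
# Sketch only: the Cerebras import path below is assumed, not canonical.
try:
    from cerebras.tf.cs_estimator import CerebrasEstimator as Estimator
except ImportError:
    # Without the Cerebras software installed, fall back to the stock
    # TensorFlow Estimator so the same script still runs on CPU/GPU.
    try:
        import tensorflow as tf
        Estimator = tf.estimator.Estimator
    except ImportError:
        Estimator = None  # neither package available in this environment

def model_fn(features, labels, mode, params):
    """Standard Estimator model_fn; its signature does not change."""
    raise NotImplementedError("build your graph here as usual")

# Construction is unchanged from the TensorFlow Estimator:
# est = Estimator(model_fn=model_fn, model_dir="./model_dir", params={})
```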
.. _prep-input-short:
Prepare input
-------------
Achieving high input data throughput often requires running your input pipeline on multiple worker processes across many CPU nodes at once. Use the :ref:`preparing-tf-input` documentation to organize your input data pipeline.
.. attention::

   Wherever you place your model and its associated scripts in your Original Cerebras Support-Cluster, ensure that they are accessible from every node in the cluster.
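The core idea behind a multi-worker input pipeline is that each worker reads a disjoint shard of the data. The pure-Python sketch below shows the sharding rule; in an actual ``input_fn`` the equivalent is ``tf.data.Dataset.shard(num_workers, worker_index)``.

```python
def shard(examples, num_workers, worker_index):
    # Worker k keeps every num_workers-th example starting at offset k,
    # the same rule tf.data.Dataset.shard(num_workers, worker_index) applies.
    return [ex for i, ex in enumerate(examples) if i % num_workers == worker_index]

# Three workers split ten examples with no overlap and no gaps:
print(shard(range(10), 3, 0))  # [0, 3, 6, 9]
print(shard(range(10), 3, 1))  # [1, 4, 7]
print(shard(range(10), 3, 2))  # [2, 5, 8]
```

Sharding early in the pipeline (before decoding or augmentation) keeps each worker from reading and discarding the other workers' bytes.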
.. _compile-on-cpu-short:
Compile on CPU
--------------
Before you run your model on the CS system, we recommend that you iterate until it compiles successfully on a CPU node that has the Cerebras Singularity container client software. Proceed as follows:

- First, run in ``validate_only`` mode.
- Then run a full compile with ``compile_only``.

See :ref:`validate-and-compile-on-cpu` to perform this step.
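The two compile stages are typically selected with command-line flags on the run script. The ``argparse`` sketch below is illustrative only: the flag names follow the mode names above, but your own ``run.py`` may spell them differently.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical driver flags mirroring the two compile stages;
    # the modes are mutually exclusive, so at most one may be set.
    parser = argparse.ArgumentParser(description="compile-on-CPU driver (sketch)")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--validate_only", action="store_true",
                       help="quick check that the model can be compiled")
    group.add_argument("--compile_only", action="store_true",
                       help="full compile without running on the CS system")
    return parser.parse_args(argv)

args = parse_args(["--validate_only"])
print(args.validate_only, args.compile_only)  # True False
```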
.. note::

   :ref:`run-py-template` documents an example template that can help you organize your code so that these steps are easy to manage. However, the ``run.py`` described in :ref:`run-py-template` is an example template only. You can organize your code in whichever way suits you best, as long as you use ``CerebrasEstimator``.
.. admonition:: Hardware requirements

   Make sure that the hardware you use for this step satisfies the minimum requirements stated in :ref:`hw-requirements-compile-only`.
.. _run-on-cs-short:
Run the job on CS system
------------------------
Finally, train, evaluate, or predict on the CS system. Use the checkpoints written during training to evaluate on a GPU or CPU. See :ref:`train-eval-predict` for documentation on this step.
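Evaluating on a CPU or GPU means loading the checkpoints the training job wrote to the model directory. TensorFlow names them ``model.ckpt-<step>.*``, and ``tf.train.latest_checkpoint(model_dir)`` resolves the newest one for you; the hypothetical helper below reimplements that lookup in plain Python to show what "latest" means.

```python
import os
import re

def latest_checkpoint(model_dir):
    """Return the model.ckpt-<step> prefix with the highest step, or None.

    Hypothetical helper: tf.train.latest_checkpoint(model_dir) does this
    for real via the 'checkpoint' state file in the model directory.
    """
    steps = set()
    for name in os.listdir(model_dir):
        match = re.match(r"model\.ckpt-(\d+)\.", name)
        if match:
            steps.add(int(match.group(1)))
    if not steps:
        return None
    return os.path.join(model_dir, "model.ckpt-%d" % max(steps))
```

Pass the returned prefix (not a file name) as ``checkpoint_path`` when calling ``evaluate`` or ``predict`` on a CPU/GPU.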
..
   4. Estimate the performance and the needed resources

   Use the ``cs_input_analyzer`` to estimate the performance and Slurm resource settings for your ``input_fn`` and model. See :ref:`cs-input-analyzer` for how to use the ``cs_input_analyzer`` tool.