org.apache.mxnet.module

DataParallelExecutorGroup


class DataParallelExecutorGroup extends AnyRef

DataParallelExecutorGroup is a group of executors that lives on a group of devices. This is a helper class used to implement data parallelism. Each mini-batch will be split and run on the devices.

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def backward(outGrads: Array[NDArray] = null): Unit

    Run backward on all devices. A backward pass should follow a call to the forward function, and backward cannot be called unless the group was bound for training (forTraining is true).

    outGrads

    Gradient on the outputs to be propagated back. This parameter is only needed when bind is called on outputs that are not a loss function.
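
    For illustration, a minimal sketch of one training step; it assumes the group was already bound with training enabled and that the symbol's outputs end in a loss (so outGrads can be omitted):

      import org.apache.mxnet.DataBatch
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: one forward/backward pass on an already-bound group.
      def trainStep(group: DataParallelExecutorGroup, batch: DataBatch): Unit = {
        group.forward(batch, isTrain = Some(true))
        group.backward() // outGrads defaults to null: outputs are a loss
      }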

  6. def bindExec(dataShapes: Seq[DataDesc], labelShapes: Option[Seq[DataDesc]], sharedGroup: Option[DataParallelExecutorGroup], reshape: Boolean = false): Unit

    Bind executors on their respective devices.

    dataShapes

    DataDesc for input data.

    labelShapes

    DataDesc for input labels.

    sharedGroup

    An existing executor group with which the new executors can share memory, if any.

    reshape

    Whether to reshape the existing executors for the new shapes instead of binding new ones.
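
    A hedged sketch of a bind call; the input names and shapes below are illustrative, not prescribed by this API:

      import org.apache.mxnet.{DataDesc, Shape}
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: bind fresh executors for 32-sample batches of 28x28 images,
      // without sharing memory with another group.
      def bind(group: DataParallelExecutorGroup): Unit = {
        val dataShapes  = Seq(new DataDesc("data", Shape(32, 1, 28, 28)))
        val labelShapes = Some(Seq(new DataDesc("softmax_label", Shape(32))))
        group.bindExec(dataShapes, labelShapes, sharedGroup = None, reshape = false)
      }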

  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def forward(dataBatch: DataBatch, isTrain: Option[Boolean] = None): Unit

    Split dataBatch according to the workload and run forward on each device.

    dataBatch

    The input data batch.

    isTrain

    Hint for the backend indicating whether this is a training pass. Defaults to None, in which case the group's forTraining value is used.
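
    For example, an inference-style pass might look like this (a sketch; the group is assumed to be bound already):

      import org.apache.mxnet.{DataBatch, NDArray}
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: run forward in inference mode, then read merged outputs.
      def predict(group: DataParallelExecutorGroup,
                  batch: DataBatch): IndexedSeq[NDArray] = {
        group.forward(batch, isTrain = Some(false))
        group.getOutputsMerged()
      }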

  12. def getBatchSize: Int

    Get the mini-batch size handled by this executor group.

  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getInputGrads(): IndexedSeq[IndexedSeq[NDArray]]

    Get the gradients to the inputs, computed in the previous backward computation.

    returns

    In the case when data-parallelism is used, the grads will be collected from multiple devices. The results will look like [[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]; those NDArrays might live on different devices.

  15. def getInputGradsMerged(): IndexedSeq[NDArray]

    Get the gradients to the inputs, computed in the previous backward computation.

    returns

    In the case when data-parallelism is used, the grads will be merged from multiple devices so that they appear to come from a single executor. The results will look like [grad1, grad2].
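
    A sketch of both access patterns after a backward pass; it assumes the group was constructed so that input gradients are computed:

      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: per-device vs merged views of the same input gradients.
      def inspectGrads(group: DataParallelExecutorGroup): Unit = {
        val perDevice = group.getInputGrads()       // [[grad1_dev1, grad1_dev2], ...]
        val merged    = group.getInputGradsMerged() // [grad1, grad2]
        println(s"inputs: ${merged.size}, copies of first grad: ${perDevice.head.size}")
      }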

  16. def getOutputShapes: IndexedSeq[(String, Shape)]

    Get the names and shapes of the outputs.

  17. def getOutputs(): IndexedSeq[IndexedSeq[NDArray]]

    Get outputs of the previous forward computation.

    returns

    In the case when data-parallelism is used, the outputs will be collected from multiple devices. The results will look like [[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]; those NDArrays might live on different devices.

  18. def getOutputsMerged(): IndexedSeq[NDArray]

    Get outputs of the previous forward computation.

    returns

    In the case when data-parallelism is used, the outputs will be merged from multiple devices so that they appear to come from a single executor. The results will look like [out1, out2].
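
    A sketch contrasting the per-device and merged views after a forward pass:

      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: getOutputs keeps device boundaries; getOutputsMerged hides them.
      def inspectOutputs(group: DataParallelExecutorGroup): Unit = {
        val perDevice = group.getOutputs()       // [[out1_dev1, out1_dev2], ...]
        val merged    = group.getOutputsMerged() // [out1, out2]
        println(s"outputs: ${merged.size}, devices: ${perDevice.head.size}")
      }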

  19. def getParams(argParams: Map[String, NDArray], auxParams: Map[String, NDArray]): Unit

    Copy data from each executor to argParams and auxParams.

    argParams

    Target parameter arrays.

    auxParams

    Target aux arrays. Note: this function updates the NDArrays in argParams and auxParams in place.
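
    A sketch of copying trained weights back to host-side maps; because the copy happens in place, the maps must already hold NDArrays of the right shapes:

      import org.apache.mxnet.NDArray
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: pull the current parameters out of the executors.
      // getParams overwrites the contents of the given NDArrays in place.
      def checkpoint(group: DataParallelExecutorGroup,
                     argParams: Map[String, NDArray],
                     auxParams: Map[String, NDArray]): Unit = {
        group.getParams(argParams, auxParams)
      }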

  20. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  21. def installMonitor(monitor: Monitor): Unit

    Install a monitor on all executors of this group.

  22. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  23. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  24. final def notify(): Unit

    Definition Classes
    AnyRef
  25. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  26. def reshape(dataShapes: Seq[DataDesc], labelShapes: Option[Seq[DataDesc]]): Unit

    Reshape executors.

    dataShapes

    New DataDesc for input data.

    labelShapes

    New DataDesc for input labels, if any.
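
    A sketch of adapting an already-bound group to a new batch size (names and shapes are illustrative):

      import org.apache.mxnet.{DataDesc, Shape}
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: move the bound executors from batch size 32 to 64.
      def rebatch(group: DataParallelExecutorGroup): Unit = {
        group.reshape(
          Seq(new DataDesc("data", Shape(64, 1, 28, 28))),
          Some(Seq(new DataDesc("softmax_label", Shape(64)))))
      }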

  27. def setParams(argParams: Map[String, NDArray], auxParams: Map[String, NDArray], allowExtra: Boolean = false): Unit

    Assign, i.e. copy, parameters to all the executors.

    argParams

    A dictionary of name to NDArray parameter mapping.

    auxParams

    A dictionary of name to NDArray auxiliary variable mapping.

    allowExtra

    Whether to allow extra parameters that are not needed by the symbol. If true, no error is thrown when argParams or auxParams contains extra parameters not needed by the executor.
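
    A sketch of pushing host-side parameters to every device; allowExtra = true tolerates unused entries, e.g. when loading a checkpoint with more parameters than the symbol uses:

      import org.apache.mxnet.NDArray
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: initialize all executors from host-side maps.
      def initialize(group: DataParallelExecutorGroup,
                     argParams: Map[String, NDArray],
                     auxParams: Map[String, NDArray]): Unit = {
        group.setParams(argParams, auxParams, allowExtra = true)
      }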

  28. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  29. def toString(): String

    Definition Classes
    AnyRef → Any
  30. def updateMetric(evalMetric: EvalMetric, labels: IndexedSeq[NDArray]): Unit

    Accumulate the performance according to evalMetric on all devices.

    evalMetric

    The metric used for evaluation.

    labels

    Typically comes from label of a DataBatch.
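
    A sketch of one evaluation step; Accuracy is one concrete EvalMetric, and the batch is assumed to carry labels:

      import org.apache.mxnet.{Accuracy, DataBatch}
      import org.apache.mxnet.module.DataParallelExecutorGroup

      // Sketch: score one batch, accumulating into a shared metric.
      def evalStep(group: DataParallelExecutorGroup,
                   metric: Accuracy,
                   batch: DataBatch): Unit = {
        group.forward(batch, isTrain = Some(false))
        group.updateMetric(metric, batch.label)
      }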

  31. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
