AI/ML Ops Projects | GitScrum

Manage MLOps projects with GitScrum. Track model development, coordinate deployments, and monitor ML in production. Deploy models 50% faster.

5 min read

How to use GitScrum for AI/ML Ops projects?

Manage MLOps in GitScrum with model-specific labels, track the experiment-to-production pipeline, and document model decisions in NoteVault. Coordinate data, training, and deployment work on one board. MLOps teams with a structured workflow deploy models 50% faster [Source: MLOps Research 2024].

MLOps workflow:

  • Data - Data preparation
  • Experiment - Model development
  • Evaluate - Model validation
  • Package - Model packaging
  • Deploy - Model serving
  • Monitor - Production monitoring
  • Retrain - Model updates

MLOps labels

| Label | Purpose |
| --- | --- |
| type-mlops | MLOps work |
| ml-data | Data pipeline |
| ml-training | Model training |
| ml-experiment | Experiment |
| ml-deployment | Model deployment |
| ml-monitoring | Production monitoring |
| ml-retraining | Model update |

MLOps columns

| Column | Purpose |
| --- | --- |
| Backlog | Planned work |
| Data Prep | Data tasks |
| Experiment | Active experiments |
| Evaluation | Model validation |
| Deployment | Production release |
| Monitoring | Live models |

NoteVault MLOps documentation

| Document | Content |
| --- | --- |
| Model catalog | All models |
| Experiment log | Experiments tried |
| Deployment runbook | How to deploy |
| Monitoring guide | What to watch |
| Retraining criteria | When to update |

Experiment task template

    ## Experiment: [name]
    
    ### Hypothesis
    If we [change], then [metric] will improve because [reason].
    
    ### Data
    - Dataset: [name/version]
    - Split: train/val/test
    - Features: [key features]
    
    ### Model
    - Architecture: [description]
    - Hyperparameters: [key params]
    
    ### Results
    - Primary metric: [value]
    - Secondary metrics: [values]
    - Comparison to baseline: [%]
    
    ### Outcome
    - [ ] Promote to production
    - [ ] Iterate
    - [ ] Abandon
    
    ### Notes
    [Key learnings]
    
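
The template fields above can also be captured as a small record so results are comparable against a baseline programmatically. This is an illustrative sketch only: the `Experiment` dataclass and the 2% promotion bar are assumptions, not GitScrum features.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """Mirrors the experiment task template fields (illustrative)."""
    name: str
    hypothesis: str
    dataset: str
    primary_metric: float
    baseline_metric: float

    def lift_vs_baseline(self) -> float:
        """Relative improvement over the baseline, as a percentage."""
        return 100.0 * (self.primary_metric - self.baseline_metric) / self.baseline_metric

    def outcome(self, promote_threshold: float = 2.0) -> str:
        """Map the lift onto the template's outcome checkboxes (assumed 2% bar)."""
        lift = self.lift_vs_baseline()
        if lift >= promote_threshold:
            return "Promote to production"
        if lift > 0:
            return "Iterate"
        return "Abandon"

exp = Experiment(
    name="wider-embeddings",
    hypothesis="If we double embedding width, AUC will improve",
    dataset="clicks-v3",
    primary_metric=0.87,
    baseline_metric=0.84,
)
print(f"{exp.lift_vs_baseline():.1f}% -> {exp.outcome()}")  # 3.6% -> Promote to production
```

A record like this can be pasted into the task's Results section, so every experiment reports the same comparison.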

Model deployment task

    ## Deploy Model: [name] v[X]
    
    ### Model Info
    - Version: [version]
    - Experiment: [link]
    - Performance: [metrics]
    
    ### Deployment
    - Environment: [prod/staging]
    - Serving: [method]
    - Resources: [CPU/GPU/memory]
    
    ### Checklist
    - [ ] Model validated
    - [ ] A/B test configured
    - [ ] Monitoring setup
    - [ ] Rollback tested
    - [ ] Alerts configured
    
    ### Rollout
    - [ ] 5% traffic
    - [ ] 25% traffic
    - [ ] 50% traffic
    - [ ] 100% traffic
    
    ### Rollback Plan
    [How to revert]
    
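
The staged rollout in the checklist above can be sketched as a simple gate: advance traffic 5% → 25% → 50% → 100% only while the candidate's error rate stays within tolerance of the baseline, otherwise trip the rollback plan. The function name and 10% tolerance are assumptions for illustration, not GitScrum behavior.

```python
# Assumed stages, matching the rollout checklist above.
ROLLOUT_STAGES = [5, 25, 50, 100]

def next_stage(current: int, error_rate: float, baseline: float,
               tolerance: float = 0.10) -> int:
    """Return the next traffic percentage, or 0 to signal a rollback."""
    if error_rate > baseline * (1 + tolerance):
        return 0  # degraded: execute the rollback plan
    i = ROLLOUT_STAGES.index(current)
    return ROLLOUT_STAGES[min(i + 1, len(ROLLOUT_STAGES) - 1)]

print(next_stage(5, error_rate=0.021, baseline=0.020))   # healthy: advance to 25
print(next_stage(25, error_rate=0.030, baseline=0.020))  # degraded: 0 (roll back)
```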

Data pipeline tasks

| Task Type | Purpose |
| --- | --- |
| Data collection | New data sources |
| Data validation | Quality checks |
| Feature engineering | New features |
| Data versioning | Dataset versions |
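
A "Data validation" task usually boils down to quality checks like the following sketch: each record is tested against expected ranges before it enters training. The schema and field names here are illustrative assumptions.

```python
# Assumed schema: field name -> (min, max) inclusive range.
SCHEMA = {
    "age": (0, 120),
    "clicks": (0, 10_000),
}

def validate(record: dict) -> list[str]:
    """Return a list of quality-check failures (empty list means clean)."""
    errors = []
    for name, (lo, hi) in SCHEMA.items():
        value = record.get(name)
        if value is None:
            errors.append(f"missing {name}")
        elif not (lo <= value <= hi):
            errors.append(f"{name}={value} outside [{lo}, {hi}]")
    return errors

print(validate({"age": 34, "clicks": 12}))  # [] (clean)
print(validate({"age": 150}))               # out-of-range age, missing clicks
```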

Training pipeline

| Stage | Tasks |
| --- | --- |
| Preprocessing | Data prep |
| Training | Model training |
| Validation | Hold-out testing |
| Hyperparameter | Tuning experiments |
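
The preprocessing stage's train/validation/hold-out split can be sketched in a few lines. The 80/10/10 ratios and fixed seed are illustrative choices; the key point is that the split is deterministic so experiments stay reproducible.

```python
import random

def split(rows: list, train: float = 0.8, val: float = 0.1, seed: int = 42):
    """Deterministic train/val/test split; the hold-out set gets the remainder."""
    rng = random.Random(seed)          # fixed seed => reproducible split
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 80 10 10
```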

Model monitoring

| Metric | Alert |
| --- | --- |
| Prediction latency | > threshold |
| Error rate | > baseline |
| Data drift | Distribution change |
| Model drift | Performance decay |
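
A minimal data-drift check compares the live feature distribution against the training-time reference. The sketch below scores the shift of the live mean in reference standard deviations, with an assumed alert threshold of 1.0. This is a crude stand-in for a proper PSI or KS-style test.

```python
import statistics

def drift_score(reference: list[float], live: list[float]) -> float:
    """Shift of the live mean from the reference mean, in reference stdevs."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(live) - mu) / sigma

reference = [10.0, 11.0, 9.0, 10.5, 9.5]  # training-time feature values
stable    = [10.2, 9.8, 10.1]
shifted   = [14.0, 15.0, 13.5]

print(drift_score(reference, stable) < 1.0)   # True: no alert
print(drift_score(reference, shifted) > 1.0)  # True: raise a data-drift alert
```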

Retraining triggers

| Trigger | Action |
| --- | --- |
| Performance decay | Scheduled retraining |
| New data | Incremental training |
| Data drift | Model update |
| Manual | Ad-hoc retraining |
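
The trigger table above maps naturally onto a small decision function. The thresholds (5-point performance decay, 100k new rows) are illustrative assumptions, not GitScrum defaults; each branch corresponds to one row of the table.

```python
def retraining_action(performance_drop: float, new_rows: int,
                      drift_detected: bool, manual: bool = False) -> str:
    """Decide which retraining action a trigger maps to (illustrative)."""
    if manual:
        return "ad-hoc retraining"
    if drift_detected:
        return "model update"
    if performance_drop > 0.05:   # assumed decay threshold on the primary metric
        return "scheduled retraining"
    if new_rows > 100_000:        # assumed volume of fresh data worth learning from
        return "incremental training"
    return "no action"

print(retraining_action(0.08, 0, False))        # scheduled retraining
print(retraining_action(0.01, 250_000, False))  # incremental training
```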

MLOps metrics

| Metric | Track |
| --- | --- |
| Experiment velocity | Experiments per week |
| Deployment frequency | Models per month |
| Model performance | Business metrics |
| Incident rate | Model failures |

Common MLOps issues

| Issue | Solution |
| --- | --- |
| Slow experiments | Parallelization |
| Deployment failures | Staging testing |
| Model drift | Monitoring |
| Data quality | Validation pipeline |

Model versioning

| Element | Track |
| --- | --- |
| Model artifacts | Model files |
| Training data | Dataset version |
| Code | Git commit |
| Config | Hyperparameters |
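
The four versioned elements above can be bundled into one reproducibility manifest per model release. This is a sketch under assumed field names; the artifact is identified by its content hash so any tampering or mismatch is detectable.

```python
import hashlib
import json

def model_manifest(artifact: bytes, dataset_version: str,
                   git_commit: str, hyperparameters: dict) -> dict:
    """One record tying together artifact, data, code, and config versions."""
    return {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "dataset_version": dataset_version,
        "git_commit": git_commit,
        "hyperparameters": hyperparameters,
    }

manifest = model_manifest(b"fake-model-bytes", "clicks-v3",
                          "a1b2c3d", {"lr": 0.01, "epochs": 10})
print(json.dumps(manifest, indent=2))
```

Attaching a manifest like this to the deployment task makes "which data, which code, which config" answerable months later.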
