Test: max_{y,h} w^T Ψ(x, y, h)


Modeling Latent Variable Uncertainty for Loss-based Learning
M. Pawan Kumar, Ben Packer, Daphne Koller
http://cvc.centrale-ponts.fr · http://dags.stanford.edu

Aim: accurate parameter estimation from weakly supervised datasets.

Latent Variable Models

x : input
y : output
h : latent variables (LV)

Example (poster image): y = "Deer"; h = the object's location in the image.
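The prediction rule max_{y,h} w^T Ψ(x, y, h) can be sketched by exhaustive enumeration over a finite label set and latent space. This is a minimal sketch; the feature map `psi` below is a hypothetical stand-in for the task-specific joint feature vector, not the one used in the paper.

```python
import numpy as np

# Hypothetical joint feature map: the input x, scaled by the latent value h,
# is copied into the block indexed by the label y. The real Psi is
# task-specific (e.g. HOG features extracted at latent location h).
def psi(x, y, h, n_labels=2):
    phi = np.zeros(n_labels * len(x))
    phi[y * len(x):(y + 1) * len(x)] = x * h
    return phi

def predict(w, x, labels, latent_space):
    """Prediction rule: (y*, h*) = argmax_{y,h} w^T Psi(x, y, h)."""
    scores = {(y, h): float(w @ psi(x, y, h))
              for y in labels for h in latent_space}
    return max(scores, key=scores.get)
```

For a realistic latent space (e.g. all pixel positions), the inner maximization over h would be done with sliding-window scoring rather than a Python loop.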

Object Detection

Predict the image class y and the object location h.
Values known during training: x, y. Values unknown during training: h.

Objective: a linear prediction rule with parameter w.
Test: max_{y,h} w^T Ψ(x, y, h)

Latent SVM
Train: min_w Σ_i Δ(y_i, y_i(w), h_i(w))
+ Employs a user-defined loss function (with restricted form)
- Does not model uncertainty in LV

The EM Algorithm
P(y, h | x) = exp(θ^T Ψ(x, y, h)) / Z
Train: max_θ Σ_i log Σ_{h_i} P(y_i, h_i | x_i)
+ Models uncertainty in LV
- Does not model accuracy of LV prediction
- Does not employ a user-defined loss function

Overview

Two distributions for two tasks:
- A conditional distribution P_θ(h_i | y_i, x_i) models uncertainty in LV.
- A delta distribution on the prediction (y_i(w), h_i(w)) models the predicted output and LV.

Ideally the two learned distributions should match exactly; limited representational power prevents an exact match.

Optimization

Minimize Rao's dissimilarity coefficient between the conditional distribution P_θ(h | y_i, x_i) and the delta distribution:

min_{θ,w} Σ_i [ Σ_h Δ(y_i, h, y_i(w), h_i(w)) P_θ(h | y_i, x_i)
              − Σ_{h,h'} Δ(y_i, h, y_i, h') P_θ(h | y_i, x_i) P_θ(h' | y_i, x_i) ]

This encourages a prediction with the correct output and a high-probability LV. Optimize by block coordinate descent over (w, θ):

Fix the delta distribution; optimize the conditional distribution.
- Case I: the delta distribution predicts the correct output (y_i = y_i(w)): increase the probability of the predicted LV h_i(w).
- Case II: the delta distribution predicts an incorrect output (y_i ≠ y_i(w)): increase the diversity of the conditional distribution.

Fix the conditional distribution; optimize the delta distribution.
- Predict the correct output and a high-probability LV.
- A difference-of-convex upper bound of the expected loss yields an efficient concave-convex procedure similar to latent SVM.

Property 1: if the loss function is independent of h, we recover latent SVM.
Property 2: if P is modeled as a delta distribution, we recover iterative latent SVM.

Code available at http://cvc.centrale-ponts.fr/personnel/pawan

Results

[Results figures: object detection and action detection; three of the four reported comparisons are statistically significant, one is not.]
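The per-sample dissimilarity objective can be sketched directly when the conditional distribution is a probability vector over a finite latent space. Names are illustrative, and the diversity term is weighted as written on the poster:

```python
import numpy as np

def dissimilarity(P, loss_pred, loss_self):
    """Rao's dissimilarity between P(h | y_i, x_i) and the delta
    distribution on the prediction (y_i(w), h_i(w)):

        sum_h Delta(y_i, h, y_i(w), h_i(w)) P(h)
      - sum_{h,h'} Delta(y_i, h, y_i, h') P(h) P(h')

    P          : probability vector over latent values h
    loss_pred  : loss_pred[h]     = Delta(y_i, h, y_i(w), h_i(w))
    loss_self  : loss_self[h, h'] = Delta(y_i, h, y_i, h')
    """
    expected_loss = float(loss_pred @ P)   # loss of the prediction under P
    diversity = float(P @ loss_self @ P)   # self-dissimilarity of P
    return expected_loss - diversity
```

Minimizing this over θ (which parameterizes P) with w fixed, and over w (which determines the prediction) with θ fixed, is the block coordinate descent described above.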

Object detection: HOG features; no object scale variation; latent space = all possible pixel positions.
Action detection: Poselet features; large object scale variation; latent space = top k person detections.
Ground-truth LV values are known at test time.
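The two experiments differ mainly in the latent space searched: every pixel position for object detection versus the top k person detections for action detection. A sketch with hypothetical input formats:

```python
def latent_space_pixels(width, height):
    """Object detection: all possible pixel positions (no scale variation)."""
    return [(u, v) for u in range(width) for v in range(height)]

def latent_space_top_k(detections, k):
    """Action detection: top-k person detections by score.
    `detections` is a hypothetical list of (score, box) pairs."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    return [box for _, box in ranked[:k]]
```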

Ψ : joint feature vector
Δ : loss function; measures the risk of a prediction
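A concrete Δ for the detection setting might combine a 0-1 loss on the output with an overlap term on the latent location. This is an illustrative choice, not necessarily the paper's exact loss:

```python
def iou(a, b):
    """Intersection over union of 1-D intervals (lo, hi); 2-D boxes work
    the same way, one term per axis."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def delta(y, h, y_pred, h_pred):
    """Delta(y, h, y(w), h(w)): 1 if the predicted output is wrong,
    otherwise 1 - overlap between true and predicted latent locations."""
    if y_pred != y:
        return 1.0
    return 1.0 - iou(h, h_pred)
```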