russtedrake
Roboticist at MIT and TRI
MIT 6.4210/2:
Robotic Manipulation
Fall 2022, Lecture 7
Follow live at https://slides.com/d/EyGIlFg/live
(or later at https://slides.com/russtedrake/fall22-lec07)
from pydrake.all import MathematicalProgram, Solve

prog = MathematicalProgram()
# x is a vector of symbolic decision variables
x = prog.NewContinuousVariables(2)

# numerical, pass in the coefficients (Q, b, c are numpy arrays you define):
# 0.5 x'Qx + b'x + c
prog.AddQuadraticCost(Q, b, c, x)

# symbolic:
prog.AddCost(x.dot(x))
prog.AddQuadraticCost(x.dot(x))

# autodiff:
def my_cost(x):
    return x.dot(x)
prog.AddCost(my_cost, x)

# Adding autodiff costs/constraints => Solve(prog) will
# pick a nonlinear optimization solver.
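To see what Solve returns for a pure quadratic cost, note that minimizing 0.5 x'Qx + b'x + c (Drake's quadratic-cost convention) has the closed-form minimizer x* = -Q⁻¹b when Q is positive definite. A minimal numpy sketch, with made-up values for Q, b, and c:

```python
import numpy as np

# Illustrative coefficients (not from the lecture): Q positive definite.
Q = np.array([[2.0, 0.0],
              [0.0, 4.0]])
b = np.array([-2.0, -4.0])
c = 1.0

# Closed-form minimizer of 0.5 x'Qx + b'x + c.
x_star = -np.linalg.solve(Q, b)

# Optimal cost at x_star.
cost = 0.5 * x_star @ Q @ x_star + b @ x_star + c
```

Here x* works out to [1, 1] with optimal cost -2; a QP solver selected by Solve(prog) would return the same answer numerically.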
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove
from functools import partial
import numpy as np
from pydrake.all import MathematicalProgram, Solve

# Assumes p_Omc (model points in the object frame O), p_s (corresponding
# scene points in world frame W), and Ns (number of correspondences) are
# already defined as numpy arrays / int.

prog = MathematicalProgram()
p = prog.NewContinuousVariables(2, 'p')
theta = prog.NewContinuousVariables(1, 'theta')

def position_model_in_world(vars, i):
    [p, theta] = np.split(vars, [2])
    R = np.array([[np.cos(theta[0]), -np.sin(theta[0])],
                  [np.sin(theta[0]),  np.cos(theta[0])]])
    p_Wmci = p + R @ p_Omc[:, i]
    return p_Wmci

def squared_distance(vars, i):
    p_Wmci = position_model_in_world(vars, i)
    err = p_Wmci - p_s[:, i]
    return err.dot(err)

for i in range(Ns):
    # forall i, |p + R*p_Omi - p_si|²
    prog.AddCost(partial(squared_distance, i=i),
                 np.concatenate([p[:], theta]))
    # forall i, p + R*p_Omi >= 0.
    prog.AddConstraint(partial(position_model_in_world, i=i),
                       vars=np.concatenate([p[:], theta]),
                       lb=[0, 0], ub=[np.inf, np.inf])

result = Solve(prog)
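The same unconstrained pose-estimation objective can be sketched without Drake: synthesize scene points from a known pose, then minimize the summed squared distances over (p, theta). This is a minimal illustration with made-up model points, using scipy.optimize.minimize as a stand-in for the nonlinear solver that Solve(prog) would pick:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model points in the object frame O (corners of a unit square).
p_Omc = np.array([[0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])

# Ground-truth pose, used only to synthesize the scene points p_s.
theta_true = 0.3
p_true = np.array([1.0, 2.0])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
p_s = p_true[:, None] + R_true @ p_Omc

def total_cost(vars):
    # vars = [p_x, p_y, theta]; sum_i |p + R(theta) p_Omc_i - p_s_i|^2
    p, theta = vars[:2], vars[2]
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    err = p[:, None] + R @ p_Omc - p_s
    return np.sum(err**2)

res = minimize(total_cost, x0=np.zeros(3))
p_hat, theta_hat = res.x[:2], res.x[2]
```

Starting from zero, the solver recovers the true translation and rotation; with the positivity constraints from the slide you would instead use a constrained method (or Drake's Solve, which dispatches to one automatically).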
MIT Robotic Manipulation Fall 2020 http://manipulation.csail.mit.edu