September 4th, 2018, 07:09 PM   #1
Newbie
 
Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Optimization with 2 variables in the objective function and 3 variables in the constraint

Can anyone recommend a book or PDF document that explains how to solve constrained optimization problems with n independent variables in the objective function and n+k independent variables in the constraints (I mean, even after rearranging the constraints, substituting variables, etc.)?

Last edited by skipjack; September 4th, 2018 at 10:05 PM.
Carlos2007 is offline  
 
September 4th, 2018, 07:22 PM   #2
Senior Member
 
Joined: Feb 2016
From: Australia

Posts: 1,734
Thanks: 605

Math Focus: Yet to find out.
Is it a convex problem?
Joppy is offline  
September 4th, 2018, 07:35 PM   #3
Newbie
 
Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Thank you very much for your quick reply.

Quote:
Originally Posted by Joppy
Is it a convex problem?
Yes. Let's say a problem like this:

max x*y

s.t. x + y + z = M

where M is a constant and the independent variables are x, y and z. Of course, this case is hypothetical and the solution is obvious. But I am interested in cases where z is not equal to 0, i.e. where z can take some other value at the optimum. I guess this kind of problem can have "optimal solutions" (by which I mean solutions that are not 0 and not impractical or unrealistic) under certain circumstances. So I want to read about and learn these specific kinds of problems.
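
For concreteness, here is a minimal numerical sketch of this example (a sketch only, assuming SciPy is available; the nonnegativity bounds on x, y and z are an added assumption so that a finite maximum exists, since with z unrestricted the objective has no upper bound):

Code:
# Maximize x*y subject to x + y + z = M, with x, y, z >= 0 added
# as an extra assumption so that a finite maximum exists.
from scipy.optimize import minimize

M = 10.0

objective = lambda v: -(v[0] * v[1])   # minimize -(x*y), i.e. maximize x*y
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] + v[2] - M}
bounds = [(0, None)] * 3               # x, y, z >= 0 (assumed)

res = minimize(objective, x0=[1.0, 1.0, 1.0], method="SLSQP",
               bounds=bounds, constraints=[constraint])

print(res.x)     # roughly [M/2, M/2, 0]
print(-res.fun)  # roughly M**2 / 4 = 25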

Last edited by skipjack; September 4th, 2018 at 10:04 PM.
Carlos2007 is offline  
September 4th, 2018, 07:38 PM   #4
Senior Member
 
Joined: Feb 2016
From: Australia

Posts: 1,734
Thanks: 605

Math Focus: Yet to find out.
Stephen Boyd's book on convex optimization might be a good start. Fortunately, it's free.

Last edited by skipjack; September 4th, 2018 at 10:07 PM.
Joppy is offline  
September 5th, 2018, 12:31 PM   #5
Senior Member
 
Joined: Mar 2015
From: New Jersey

Posts: 1,603
Thanks: 115

Quote:
Originally Posted by Carlos2007
max x*y
s.t. x + y + z = M
where M is a constant and the independent variables are x, y and z.
w = xy = x(M - x - z)
dw = (M - 2x - z) dx - x dz
w is a max when dw = 0 no matter what dx and dz are. ->
z = 0, x = M/2, y = M/2
w_max = M^2/4

Or you could use Lagrange multipliers, which you don't need here because you can solve the constraint for one of the variables directly.
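
A quick SymPy check of the substitution above (a sketch only, assuming SymPy is available; symbol names follow the post, and z is held at 0 when locating the stationary point in x):

Code:
# Substitute y = M - x - z into w = x*y and inspect the partial derivatives.
import sympy as sp

x, z, M = sp.symbols("x z M", real=True)
w = x * (M - x - z)                 # w = x*y with y = M - x - z

print(sp.diff(w, x))                # M - 2*x - z
print(sp.diff(w, z))                # -x

# With z held at 0, the stationary point in x is x = M/2, giving w = M^2/4:
print(sp.solve(sp.diff(w, x).subs(z, 0), x))   # [M/2]
print(sp.simplify(w.subs({z: 0, x: M / 2})))   # M**2/4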
zylo is offline  
September 5th, 2018, 05:07 PM   #6
Newbie
 
Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Quote:
Originally Posted by Carlos2007
Of course, this case is hypothetical and the solution is obvious. But I am interested in cases where z is not equal to 0, i.e. where z can take some other value at the optimum. I guess this kind of problem can have "optimal solutions" (by which I mean solutions that are not 0 and not impractical or unrealistic) under certain circumstances. So I want to read about and learn these specific kinds of problems.

Dear zylo, thank you very much for your explanation. As I said, that example is just hypothetical. What I want is to learn how to proceed, and to get practical skill (by solving some exercises), in cases where the objective function has n independent variables, the constraints (equalities or inequalities) involve n + k independent variables, and, after applying the Lagrangian method, the extra k variables in the constraints are not all equal to 0 at the optimum.

In reality, for now, I just need to work with the case of n and n+1 variables. Another professor in the forum suggested a book, and at first glance it seems quite complicated for my level. This weekend I will read it, but it would be great if you know another resource (PDF or book) where the author solves this kind of problem and/or there is a problem set with solutions.
Carlos2007 is offline  
September 5th, 2018, 06:59 PM   #7
Newbie
 
Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Quote:
Originally Posted by zylo
w = xy = x(M - x - z)
dw = (M - 2x - z) dx - x dz
w is a max when dw = 0 no matter what dx and dz are. ->
z = 0, x = M/2, y = M/2
w_max = M^2/4

Or you could use Lagrange multipliers, which you don't need here because you can solve the constraint for one of the variables directly.
Dear zylo, thank you very much for your quick response. As I wrote in reply to the other professor:

Quote:
Originally Posted by Carlos2007
Of course, this case is hypothetical and the solution is obvious. But I am interested in cases where z is not equal to 0, i.e. where z can take some other value at the optimum. I guess this kind of problem can have "optimal solutions" (by which I mean solutions that are not 0 and not impractical or unrealistic) under certain circumstances. So I want to read about and learn these specific kinds of problems.
What I want is to learn how to solve these problems with n independent variables in the objective function and n+k independent variables in the constraints (I think, for now, it is enough to know about the case of n and n+1). Another professor in this thread kindly suggested a book. I briefly took a look at it; it seems somewhat complicated for me and the exercises seem quite theoretical (my fault, of course). But I will read the book this weekend.

However, if you know a book or a PDF where a problem like the one I used as an example is solved, it would be great if you could let me know.
Carlos2007 is offline  
September 6th, 2018, 05:58 AM   #8
Senior Member
 
Joined: May 2016
From: USA

Posts: 1,192
Thanks: 489

Carlos

As I told you at the other site, there is no special technique involved. Assuming that the objective function and the relevant constraints are all differentiable, you set up the Lagrangian with the objective function and a Lagrange multiplier for each constraint. You take the partial derivatives, set them to zero, and solve the resulting system of equations. I even partially worked out an example for you. Any text on Lagrangian constrained optimization will give you what you need, provided that you are working with differentiable functions.
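
As a concrete sketch of that recipe (assuming SymPy is available; the second constraint z = c is a hypothetical addition, included only so that the stationarity conditions pin down an interior solution):

Code:
# Lagrangian = objective + one multiplier per constraint; set all
# partial derivatives to zero and solve the resulting system.
import sympy as sp

x, y, z, l1, l2, M, c = sp.symbols("x y z lambda1 lambda2 M c", real=True)

# maximize x*y  s.t.  x + y + z = M  and  z = c  (second constraint is hypothetical)
L = x*y + l1*(M - x - y - z) + l2*(c - z)

eqs = [sp.diff(L, v) for v in (x, y, z, l1, l2)]
sol = sp.solve(eqs, [x, y, z, l1, l2], dict=True)
print(sol)
# x = y = (M - c)/2, z = c, lambda1 = (M - c)/2, lambda2 = -(M - c)/2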

Why do you keep asking the same question?
JeffM1 is offline  
September 6th, 2018, 07:57 AM   #9
Senior Member
 
Joined: Mar 2015
From: New Jersey

Posts: 1,603
Thanks: 115

More generally,

Maximize w=f(x,y) subject to g(x,y,u,v)=0

g(x,y,u,v) = 0 -> y = y(x,u,v) -> w = f(x, y(x,u,v)) = F(x,u,v)

A possible max for w occurs when dw = 0 for arbitrary dx, du, dv, which is the case if F_x = F_u = F_v = 0. These are three equations in the three unknowns x, u, v, and then y = y(x,u,v).

I find a meaningless, verbose, obtuse recitation of Lagrange multipliers, which anyone can look up, to be totally useless in this non-typical situation.
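
A small SymPy sketch of that substitution procedure (a sketch only, assuming SymPy is available; the constraint g below is made up purely to illustrate the mechanics):

Code:
# Solve g = 0 for y, substitute into w = f(x, y), then set Fx = Fu = Fv = 0.
import sympy as sp

x, y, u, v = sp.symbols("x y u v", real=True)

f = x*y                                       # objective in x and y only
g = x + y - (u - 1)**2 - (v - 2)**2 - 6       # hypothetical constraint g = 0

y_of = sp.solve(g, y)[0]                      # y = y(x, u, v)
F = f.subs(y, y_of)                           # F(x, u, v)

Fx, Fu, Fv = (sp.diff(F, s) for s in (x, u, v))
print(Fx, Fu, Fv, sep="\n")

# Fu = Fv = 0 force u = 1, v = 2 (for x != 0); then Fx = 0 gives x = 3.
pt = {u: 1, v: 2}
x_star = sp.solve(Fx.subs(pt), x)[0]
print(x_star, y_of.subs({**pt, x: x_star}), F.subs({**pt, x: x_star}))   # 3 3 9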
zylo is offline  
September 6th, 2018, 12:23 PM   #10
Senior Member
 
Joined: Mar 2015
From: New Jersey

Posts: 1,603
Thanks: 115

As far as I can determine, Lagrange's equations only apply when the function to be maximized and the constraints contain the same variables.

For example,
Maximize f(x,y,u,v) subject to g(x,y,u,v)=0 and h(x,y,u,v)=0

If you know otherwise, please demonstrate with the OP's example:
Maximize w=xy subject to x+y+z-M=0
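
For reference, a minimal SymPy sketch of that demonstration (assuming SymPy is available). The Lagrangian can be set up mechanically even though z does not appear in the objective; stationarity then forces the multiplier to 0 and x = y = 0, which reflects the fact that with z unrestricted this example has no finite maximum:

Code:
# Lagrange multipliers applied directly to the OP's example:
# maximize x*y subject to x + y + z - M = 0.
import sympy as sp

x, y, z, lam, M = sp.symbols("x y z lambda M", real=True)

L = x*y + lam*(M - x - y - z)
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
print(sp.solve(eqs, [x, y, z, lam], dict=True))
# [{x: 0, y: 0, z: M, lambda: 0}] -- the only stationary point; it is not a
# maximizer, since the objective is unbounded above when z is free.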
zylo is offline  