My Math Forum Optimization with 2 variables in the objective function and 3 variables in constraint

 Calculus Math Forum

 September 4th, 2018, 06:09 PM #1 Newbie   Joined: Sep 2018 From: Costa Rica Posts: 4 Thanks: 0 Optimization with 2 variables in the objective function and 3 variables in constraint Can anyone recommend a book or PDF that explains how to solve constrained optimization problems with n independent variables in the objective function and n + k independent variables in the constraints (that is, after rearranging the constraints, substituting variables, etc.)? Last edited by skipjack; September 4th, 2018 at 09:05 PM.
 September 4th, 2018, 06:22 PM #2 Senior Member   Joined: Feb 2016 From: Australia Posts: 1,801 Thanks: 636 Math Focus: Yet to find out. Is it a convex problem?
September 4th, 2018, 06:35 PM   #3
Newbie

Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Quote:
 Originally Posted by Joppy Is it a convex problem?
Yes, let's say a problem like this:

max x*y

s.t. M = x + y + z

where M is a constant and the independent variables are x, y, and z. Of course, this case is hypothetical and the solution is obvious, but I am interested in cases where z is not equal to 0, i.e. where z can take another value. I guess this kind of problem can have "optimal solutions" (by which I mean solutions that are not 0 and not impractical or unrealistic) under certain circumstances, so I want to read and learn about these specific kinds of problems.
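A quick numerical sketch may make the example concrete (this adds the restriction x, y, z >= 0, which is my assumption, not part of the original statement; without it, sending z to minus infinity makes x*y unbounded). It substitutes y = M - x - z and grid-searches over (x, z):

```python
# Maximize x*y subject to x + y + z = M, with the added assumption
# x, y, z >= 0.  Substitute y = M - x - z and grid-search over (x, z).
M = 10.0
steps = 200

best = (float("-inf"), None, None)   # (value, x, z)
for i in range(steps + 1):
    for j in range(steps + 1):
        x = M * i / steps
        z = M * j / steps
        y = M - x - z                # eliminate y via the constraint
        if y < 0:                    # keep all three variables nonnegative
            continue
        if x * y > best[0]:
            best = (x * y, x, z)

w_opt, x_opt, z_opt = best
print(x_opt, z_opt, w_opt)           # -> 5.0 0.0 25.0
```

The grid lands on x = y = M/2 and z = 0, i.e. the "extra" variable is driven to its bound.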

Last edited by skipjack; September 4th, 2018 at 09:04 PM.

 September 4th, 2018, 06:38 PM #4 Senior Member   Joined: Feb 2016 From: Australia Posts: 1,801 Thanks: 636 Math Focus: Yet to find out. Stephen Boyd's book on convex optimization might be a good start. Fortunately, it's free. Last edited by skipjack; September 4th, 2018 at 09:07 PM.
September 5th, 2018, 11:31 AM   #5
Banned Camp

Joined: Mar 2015
From: New Jersey

Posts: 1,720
Thanks: 125

Quote:
 Originally Posted by Carlos2007 max = x*y s.t: M = x + y + z where M is a constant here and the independent variables are x, y and z.
w = xy = x(M - x - z)
dw = (M - 2x - z)dx - x dz
Both coefficients cannot vanish unless x = 0 (which gives w = 0), so with z unrestricted there is no interior max; the product is unbounded as z -> -infinity. Restricting z >= 0, dw/dz = -x < 0 pushes z to its boundary. ->
z = 0, x = M/2, y = M/2
wmax = M^2/4

Or you could use Lagrange multipliers, which you don't need here because you can solve the constraint for one of the variables directly.
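As a sanity check (a small numerical sketch; M and the test point are arbitrary choices of mine), central differences on w(x, z) = x(M - x - z) should reproduce the partials w_x = M - 2x - z and w_z = -x, i.e. dw = (M - 2x - z) dx - x dz:

```python
# Finite-difference check of the partials of w(x, z) = x(M - x - z).
M = 10.0

def w(x, z):
    return x * (M - x - z)

x0, z0, h = 3.0, 1.0, 1e-6
dwdx = (w(x0 + h, z0) - w(x0 - h, z0)) / (2 * h)   # expect M - 2*x0 - z0 = 3
dwdz = (w(x0, z0 + h) - w(x0, z0 - h)) / (2 * h)   # expect -x0 = -3
print(round(dwdx, 4), round(dwdz, 4))              # -> 3.0 -3.0
```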

September 5th, 2018, 04:07 PM   #6
Newbie

Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Quote:
 Originally Posted by Carlos2007 Of course, this case is hypothetical and the solution is obvious. But I am interested in cases where z is not equal 0. I am interested in cases where z can have another value. I guess this kind of problem can have "optimal solutions" under certain circumstances (what I mean by this non 0 or non unpractical or unrealistic solutions). So, I want to read and learn about these specific kinds of problems.

Dear zylo, thank you very much for your explanation. As I said, the example I wrote is just hypothetical. What I want is to learn how to proceed and gain practical skills (I want to solve some exercises) in cases where, when optimizing a function of n independent variables with n + k independent variables involved in the constraints (equality or inequality), after applying the Lagrangian method at least the extra k variables in the constraints are not all equal to 0.

In reality, for now I just need to work with the case of n and n + 1 variables. Another professor in the forum suggested a book, and at first glance it seems quite complicated for my level. This weekend I will read it, but it would be great if you know another resource (a PDF or book) where the writer solves this kind of problem and/or there is a problem set with solutions.

September 5th, 2018, 05:59 PM   #7
Newbie

Joined: Sep 2018
From: Costa Rica

Posts: 4
Thanks: 0

Quote:
 Originally Posted by zylo w = xy = x(M-x-z) dw = (M-2x-z)dx - x dz -> z = 0, x = M/2, y = M/2, wmax = M^2/4 Or you could use Lagrange multipliers, which you don't need here because you can solve the constraint for one of the variables directly.
Dear zylo, thank you very much for your quick response. As I wrote in reply to another professor:

Quote:
 Originally Posted by Carlos2007 Of course, this case is hypothetical and the solution is obvious. But I am interested in cases where z is not equal 0. I am interested in cases where z can have another value. I guess this kind of problem can have "optimal solutions" (what I mean by this non 0 or non unpractical or unrealistic solutions) under certain circumstances. So, I want to read and learn about these specific kinds of problems.
What I want is to learn how to solve these problems with n independent variables in the objective function and n + k independent variables in the constraints (for now, I think it is enough to know the case of n and n + 1). Another professor in this thread kindly suggested a book. I briefly looked at it, and it seems somewhat complicated for me and the exercises seem quite theoretical (my fault, of course). But I will read the book this weekend.

However, if you know a book or a PDF where a problem like my example is solved, it would be great if you let me know.

 September 6th, 2018, 04:58 AM #8 Senior Member   Joined: May 2016 From: USA Posts: 1,310 Thanks: 551 Carlos As I told you at the other site, there is no special technique involved. Assuming that the objective function and the relevant constraints are all differentiable, you set up the Lagrangian from the objective function and a Lagrange multiplier for each constraint. You take the partial derivatives, set them to zero, and solve the resulting system of equations. I even partially worked out an example for you. Any text on Lagrangian constrained optimization will give you what you need, provided that you are working with differentiable functions. Why do you keep asking the same question?
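For concreteness, here is that procedure applied to a small well-posed example (my choice of example, not from the thread): maximize f(x, y) = xy subject to x + y = M.

```latex
\begin{aligned}
\mathcal{L}(x,y,\lambda) &= xy + \lambda\,(M - x - y),\\
\partial_x \mathcal{L} &= y - \lambda = 0,\qquad
\partial_y \mathcal{L} = x - \lambda = 0,\qquad
\partial_\lambda \mathcal{L} = M - x - y = 0,\\
&\Rightarrow\; x = y = \lambda = \tfrac{M}{2},\qquad
f_{\max} = \tfrac{M^2}{4}.
\end{aligned}
```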
 September 6th, 2018, 06:57 AM #9 Banned Camp   Joined: Mar 2015 From: New Jersey Posts: 1,720 Thanks: 125 More generally: maximize w = f(x,y) subject to g(x,y,u,v) = 0. Solving g(x,y,u,v) = 0 for y gives y = y(x,u,v), so w = f(x, y(x,u,v)) = F(x,u,v). A possible max for w occurs when dw = 0 for arbitrary dx, du, dv, which is the case if Fx = Fu = Fv = 0, three equations in the three unknowns x, u, v; then y = y(x,u,v). I find a meaningless, verbose, obtuse recitation of Lagrange's multipliers, which anyone can look up, to be totally useless in this non-typical situation.
 September 6th, 2018, 11:23 AM #10 Banned Camp   Joined: Mar 2015 From: New Jersey Posts: 1,720 Thanks: 125 As far as I can determine, Lagrange's equations only apply when the function to be maximized and the constraints contain the same variables. For example: maximize f(x,y,u,v) subject to g(x,y,u,v) = 0 and h(x,y,u,v) = 0. If you know otherwise, please demonstrate with the OP's example: maximize w = xy subject to x + y + z - M = 0.
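For what it's worth, the multipliers can be written down for the OP's example too; the method does apply to the extra variable, it just exposes the degeneracy. A sketch:

```latex
\begin{aligned}
\mathcal{L} &= xy + \lambda\,(M - x - y - z),\\
\partial_x \mathcal{L} &= y - \lambda = 0,\quad
\partial_y \mathcal{L} = x - \lambda = 0,\quad
\partial_z \mathcal{L} = -\lambda = 0,\quad
\partial_\lambda \mathcal{L} = M - x - y - z = 0,\\
&\Rightarrow\; \lambda = 0,\; x = y = 0,\; z = M,\; w = 0,
\end{aligned}
```

the only stationary point. With z unrestricted there is no finite maximum, and the familiar answer x = y = M/2 only appears once a bound such as z >= 0 is imposed.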
