Regarding this example:
> F:= c -> Int(3*c*x^2 + c^2*x, x = 0 .. 1);
Error, (in Optimization:-NLPSolve) complex value encountered
There are a couple of issues here. First, NLPSolve contains a check for type(..., complex). That check is my mistake; it should test for complex(numeric) instead, since F(0.), for example, satisfies type/complex even though it is not a numeric value.
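To see the distinction the fix targets, here is a quick illustration at the Maple prompt. The first two results follow from the ?type,complex help page; the last simply restates the behaviour described above:

```
type(2.3, complex(numeric));          # true: a float is complex(numeric)
type(1.0 + 2.0*I, complex(numeric));  # true: numeric real and imaginary parts
# The bare type `complex' is broader, which is why a non-numeric
# result such as F(0.) slips past the current check:
type(F(0.), complex);                 # true, per the problem above
```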
The other problem is that, while the Optimization routines apply evalf to the objective-function procedure when an initial attempt at evalhf computation fails, they currently do not do the same for the gradient and Jacobian procedures generated through the codegen automatic-differentiation package. The reported error actually occurs during evaluation of the gradient of F. If you try the above call with method=nonlinearsimplex, a derivative-free method, it will succeed. acer's suggestion to supply your own gradient will also help in this case; it's generally a good idea anyhow, as it often leads to better performance.
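The original NLPSolve call isn't reproduced above, but assuming an operator-form call such as NLPSolve(F, 0 .. 1), the two remedies just mentioned look roughly like this. The objectivegradient procedure signature here follows the Optimization options help page; treat this as a sketch rather than verified output:

```
F := c -> Int(3*c*x^2 + c^2*x, x = 0 .. 1):

# Remedy 1: a derivative-free method, so no gradient procedure
# is generated at all.
Optimization:-NLPSolve(F, 0 .. 1, method = nonlinearsimplex);

# Remedy 2: supply the gradient yourself.  The integral evaluates
# symbolically to c + c^2/2, so its derivative is 1 + c.
gradF := proc(V, W)
    W[1] := 1.0 + V[1];
end proc:
Optimization:-NLPSolve(F, 0 .. 1, objectivegradient = gradF);
```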
I will add these notes to the bug report Robert submitted, and we'll look at improving the situation in a future release. In the meantime, a workaround is to ensure that the procedure representing the objective function always returns a numeric value, as Robert and acer mention above. Indeed, the Optimization help pages state that an objective function given as a procedure should accept floating-point arguments and return a floating-point value. The Optimization commands are "forgiving" to some extent, for ease of use, but ensuring numeric input and output leads to the most efficient computation.
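Concretely, that workaround amounts to forcing numeric evaluation inside the objective itself, for example by wrapping the inert Int in evalf so it is resolved by numeric quadrature (a sketch of the idea):

```
F := c -> evalf(Int(3*c*x^2 + c^2*x, x = 0 .. 1)):
F(0.5);   # returns 0.625 as a float, not an unevaluated Int
```

With the objective always returning a float, both the evalhf attempt and any generated gradient procedure see purely numeric values.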