
## Thanks JacquesC. Yes that's...

Thanks JacquesC. Yes, that's the Gibbs phenomenon. The only reason we use the FFT is that it is fast, and we need to process large data sets. The reason we want to remove the overshooting, undershooting, and rippling is that we need to integrate the FFT result against another smooth function at a later stage. Although it sounds difficult or even impossible to remove the Gibbs phenomenon (without increasing the number of sample points by a large margin), are there tricks we can use to mitigate it at the integration stage (the later stage)? For example, we know the exact location and magnitude of the jump. Is there a way to downweight this erroneous region in the numerical integration? With Riemann integration this is probably not easy... I don't know...

Also, if there is some way to work around the 1/v problem in my trick (subtract 1/v first and then add it back manually later), that would be nice. The only trouble is that 1/v fails at v=0. If I can design a function that behaves like 1/v asymptotically, does not fail at v=0, and has a closed-form Fourier transform, then we can still do the trick, right?

Yes, you are right that there is an explicit solution for the simple example I gave. However, my real problem is much more complicated, and it is not even easy to check whether it has a closed-form Fourier transform. Not many real-world functions have a nice explicit FT...
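One standard mitigation of the ringing in a truncated Fourier sum is the Lanczos sigma approximation: multiply the n-th coefficient by sinc(n/N) before summing. This does not eliminate the Gibbs phenomenon, but it damps the overshoot near the jump substantially. A minimal NumPy sketch (the square wave and all names here are illustrative, not from the original problem):

```python
import numpy as np

def square_wave_partial_sum(x, N, sigma=False):
    """Partial Fourier sum of sign(sin x) up to harmonic N;
    optionally apply Lanczos sigma factors to damp Gibbs ringing."""
    s = np.zeros_like(x)
    for n in range(1, N + 1, 2):          # square wave has odd harmonics only
        c = 4.0 / (n * np.pi)
        if sigma:
            c *= np.sinc(n / N)           # np.sinc(t) = sin(pi*t)/(pi*t)
        s += c * np.sin(n * x)
    return s

x = np.linspace(0.01, np.pi - 0.01, 2000)
plain = square_wave_partial_sum(x, 64)
smoothed = square_wave_partial_sum(x, 64, sigma=True)

# The unsmoothed sum overshoots the true plateau value 1 near the jump
# (the classic ~9%-of-the-jump Gibbs overshoot); the sigma-smoothed sum
# stays much closer to 1, at the price of slightly blurring the jump.
print(plain.max(), smoothed.max())
```

The trade-off is a gentler transition at the discontinuity, which may actually help if the result is later integrated against a smooth function.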

## Thanks Robert! You mean four...

Thanks Robert! Do you mean four alternative methods that will lead to the same result, or something else? I am not sure whether I have to gather all four items you've listed...

## Maple 10.06, IBM INTEL NT,...

Maple 10.06, IBM INTEL NT, Nov 10 2006 Build ID

## It's the one in Matlab...

It's the one in Matlab 2007a. How do I tell which version of Maple it uses? Thanks!

## Thanks! I don't have...

Thanks! I don't have CodeGeneration for this version. I only have codegen...

## How do I do the...

How do I do the following? Here I just give a simple example. In my actual project I have very complicated "aa" and "bb" involving x[1, k] and x[2, k]. Because they are so complicated, I don't want to do the substitution myself using "subs". I hoped the translation would do the substitution for me automatically, but it didn't. What's the problem?

```
aa := x[1, k];
bb := x[2, k];
cc := proc(x, m)
  S := Array([0, 0]);
  for k from 1 to m do
    S[1] := S[1] + aa;
    S[2] := S[2] + (bb)^2;
  od;
  S
end proc;
dd := prep2trans(cc);
ee := C(dd, optimized);
```

As you can see from the following outputs, "aa" and "bb" did not get substituted by the expressions indexed by k. Outputs:

```
dd := proc (x, m) local S, k;
  S := Array([0, 0]);
  for k to m do S[1] := S[1]+aa; S[2] := S[2]+bb^2 end do;
  S
end proc

void dd(x,m,S)
double x;
double m;
double S[2];
{
  int k;
  {
    S[0] = 0.0;
    S[1] = 0.0;
    for(k = 1; k <= m; k++)
    {
      S[0] += aa;
      S[1] += bb*bb;
    }
    return;
  }
}

ee := NULL
```
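The underlying issue is that a code translator generally will not resolve global names like `aa` into their values: the expressions have to be substituted into the procedure body *before* translation. A toy Python illustration of that principle (plain string work, not Maple's codegen; the template and names are made up for this example):

```python
# The expressions that the global names stand for (hypothetical values):
exprs = {"aa": "x[1][k]", "bb": "x[2][k]"}

# Loop body as it would reach a translator with the names unresolved:
template = (
    "for (k = 1; k <= m; k++) {\n"
    "    S[0] += aa;\n"
    "    S[1] += bb*bb;\n"
    "}\n"
)

# Substitute the expressions in before "translating" -- otherwise the
# bare names aa and bb survive into the generated C, as in the post.
c_source = template
for name, expr in exprs.items():
    c_source = c_source.replace(name, expr)
print(c_source)
```

In Maple terms, the analogous step would be substituting the expressions into the procedure body (e.g. via `subs`) before handing it to the translator, even if doing so by hand is tedious.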

## How to provide a bound? ...

How to provide a bound?

## I just couldn't get the 2D...

I just couldn't get the 2D matrix to work in Maple. I am accessing Maple from Matlab; I believe it has a Maple 10 engine... Here is my code:

```
aa := proc(x, m)
  S := Array([0, 0]);
  for k from 1 to m do
    S[1] := S[1] + x[0][k];
    S[2] := S[2] + (x[1][k])^2;
  od;
  S
end proc;
bb := prep2trans(aa);
cc := C(bb, optimized);
```

Output:

```
Warning: Warning, `S` is implicitly declared local to procedure `aa`
Warning: Warning, `k` is implicitly declared local to procedure `aa`
aa := proc (x, m) local S, k;
  S := Array([0, 0]);
  for k to m do S[1] := S[1]+x[0][k]; S[2] := S[2]+x[1][k]^2 end do;
  S
end proc
ans =
bb := proc (x, m) local S, k;
  S := Array([0, 0]);
  for k to m do S[1] := S[1]+x[0][k]; S[2] := S[2]+x[1][k]^2 end do;
  S
end proc
??? Error using ==> maple at 129
Error, (in codegen/C) Unable to translate this subscript into C x[0][k]
```
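One common workaround when a translator cannot handle nested subscripts like `x[0][k]` (an assumption on my part, not the documented fix) is to pass a flat 1-D array and do the row-major index arithmetic explicitly, which maps one-to-one onto C. A Python sketch of the same computation as the procedure above, with illustrative names:

```python
def idx(i, k, ncols):
    """Row-major position of element (i, k) in a flattened 2-D array."""
    return i * ncols + k

def sums(xflat, m, ncols):
    """Same computation as the Maple proc: S[0] = sum of row 0,
    S[1] = sum of squares of row 1, over columns 0..m-1."""
    S = [0.0, 0.0]
    for k in range(m):
        S[0] += xflat[idx(0, k, ncols)]
        S[1] += xflat[idx(1, k, ncols)] ** 2
    return S

# 2 x 3 data, rows [1, 2, 3] and [4, 5, 6], flattened row-major:
xflat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(sums(xflat, 3, 3))
```

The single subscript `xflat[i*ncols + k]` is something essentially any C code generator can translate.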

## Thanks a lot Robert! That...

Thanks a lot Robert! That really helps! I was hoping to do that programmatically for various combinations of a, b, c... Do you think the above procedures can be automated? Thanks again for your help!

## Hi JacquesC, Thanks a lot...

Hi JacquesC, thanks a lot; these are extremely good thoughts. You said, "From the Taylor series, you can then find what looks like the closest singularity." What reference point do you use when you say "closest"? Let's say the goal is to find the singularity closest to zero, with

y'(t, w) = a + b*y(t, w) + c*(y(t, w))^2,  and  F(t, w) = exp(y(t, w)),

where the derivative is with respect to t, and once the ODE is solved for a particular t, we treat that t as fixed. Here w is a complex-valued variable, and we want to integrate with respect to w along a suitable contour in the complex plane. To do that, we need to know how far the singularities are from zero; then we can choose a contour between zero and the nearest singularity. We will have to select the contour carefully, but without a closed-form expression for F(t, w) we are like blind people walking in the dark. How can Maple handle this problem? Any more thoughts? Thanks!
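If "closest" is read as distance from the expansion point (here 0), then it is the radius of convergence of the Taylor series, and a standard numerical probe is the ratio (Domb-Sykes) test on the coefficients. For a Riccati equation like the one above, the Taylor coefficients come from a simple recurrence, so no closed form is needed. A sketch, using the illustrative test case a=1, b=0, c=1, y(0)=0 (my choice, not from the post), whose solution y = tan(t) blows up at t = pi/2:

```python
import math

def riccati_taylor(a, b, c, y0, N):
    """Taylor coefficients y_0..y_N at t = 0 of y' = a + b*y + c*y^2.
    Matching powers of t: (n+1)*y_{n+1} = a*[n==0] + b*y_n + c*(y^2)_n."""
    y = [0.0] * (N + 1)
    y[0] = y0
    for n in range(N):
        conv = sum(y[j] * y[n - j] for j in range(n + 1))   # (y^2)_n
        rhs = (a if n == 0 else 0.0) + b * y[n] + c * conv
        y[n + 1] = rhs / (n + 1)
    return y

coef = riccati_taylor(1.0, 0.0, 1.0, 0.0, 31)

# tan(t) is odd, so compare successive odd coefficients; for a series with
# radius of convergence r, |c_n / c_{n+2}| -> r^2.
radius_est = math.sqrt(abs(coef[29] / coef[31]))
print(radius_est)   # approaches pi/2 ~ 1.5708
```

The same recurrence idea (generate coefficients, then ratio-test them) carries over to complex coefficients a(w), b(w), c(w), giving a numerical estimate of how far the nearest singularity sits from the expansion point before committing to a contour.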

## okay, I will try! You never...

Okay, I will try! Have you never run into these problems? I thought everybody would have similar problems...

## I see, basically I have to...

I see; basically I have to do it myself... That's bad. Still, there is one thing that's good for my purpose -- at least it will generate the gradient for me by automatic differentiation... That's actually why I want to build the loop in Maple and translate it into C...

## More observations and...

More observations and questions:

1. Maple is a memory eater. How much memory does it support at most? Say I currently have 1.5GB and am using XP; will upgrading to 2GB help? Or 3GB?
2. Simply closing and relaunching Maple doesn't help. You really need to reboot before doing heavy-duty work.

## If anyway at the end of the...

If at the end of the day my code will be translated into C anyway, do I care whether I use 1/2 or 0.5? If using 0.5 introduces an error in Maple, the same error will ultimately occur when I compute everything in C. I've found that using 1/2 tends to make the Maple computation slower... is this a well-founded observation?
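The slowdown is plausible: 1/2 keeps Maple in exact rational arithmetic, where numerators and denominators grow as terms accumulate, while 0.5 forces fixed-size floating point. A Python analogue (not Maple itself) of the exactness/cost trade-off, using the standard `fractions` module:

```python
from fractions import Fraction

# Exact rationals preserve identities that binary floats lose...
exact = Fraction(1, 10) + Fraction(2, 10)
approx = 0.1 + 0.2
print(exact == Fraction(3, 10))   # True: exact arithmetic
print(approx == 0.3)              # False: binary rounding error

# ...but the representation grows as terms accumulate, which costs time.
h = sum(Fraction(1, n) for n in range(1, 25))   # exact harmonic sum
print(h.denominator)   # a large integer, not a fixed-size double
```

So if the final computation runs in C doubles anyway, using 0.5 in Maple reproduces the same rounding behavior you will get in C, and avoids paying for exactness you will ultimately discard; the exception is any intermediate simplification that only succeeds with exact values.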

## Sorry, the memory is...

Sorry, the memory is 1.5GB... not 2GB...