pagan

5147 Reputation

23 Badges

17 years, 122 days

 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix a path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are replies submitted by pagan

It often amazes me that people will create a procedure p which only ever gets used via a single function call like p(x). That is often a clue that the procedure simply wasn't necessary.
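For instance (a hypothetical illustration of my own, not from any particular thread), a definition like

> p := x -> x^2 + sin(x):
> plot(p(x), x = 0 .. 1);

accomplishes nothing that passing the expression directly would not:

> plot(x^2 + sin(x), x = 0 .. 1);

The procedure p earns its keep only if it is ever applied to something other than the single name x.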

If it helps you any:

> simplify(arctan(sin(eps),cos(eps))) assuming eps>=-Pi/2, eps<=Pi/2;
                              eps

> simplify(convert( arctan(sin(eps)/cos(eps)), tan)) assuming eps>=-Pi/2, eps<=Pi/2;
                              eps

@PatrickT Sorry, I misunderstood entirely. I misread and thought that you were talking about the first argument (which doesn't allow for lists).

Sorry if you knew this already, but you might also produce `expand/&.` that could come in useful.

It might be based upon the existing `expand/&*`, for example.

Alternatively, you could construct a routine `value/&*` instead of `value/&.` and get the benefit of the existing expand (and any other?!) functionality for &*.

If I may paraphrase: the problem is in using `evalm` to get the value of &* function calls. I agree that using `evalm` is not the right way to go about that for capital-"M" Matrices. But other than that, I don't see a problem with using &* as the neutral noncommutative multiplication operator. Extending value, as you suggest, is likely much better than using `evalm`.

Yes, I tested between Sparc and x86, for floats (your float example above, actually).

For anyone interested, please see here and here.

And the issue pertains to integer[4] width as well. Suppose I write a 4-byte integer like 3^14 = 4782969 to a file in binary mode, using an fwrite4 analogous to the above routines, on Sparc+32bit Solaris or a G5. If I then use the equivalent fread4 command on x86+Linux, I will get the incorrect result 2046511104. But if I swap the bytes around, write them out one byte at a time using fwrite1, and then re-read the new file using fread4, I will end up with the desired 4782969.

That is to say, on big-endian Sparc:

> fread4:=define_external('fread',
> 'ptr'::REF(ARRAY(datatype=integer[4])),
> 'size'::integer[4],
> 'count'::integer[4],
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.1"):
> fopen1:=define_external('fopen',
> 'filename'::string,
> 'mode'::string,
> 'RETURN'::integer[4],
> 'LIB'="libc.so.1"):
> fclose1:=define_external('fclose',
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.1"):
> fwrite4:=define_external('fwrite',
> 'ptr'::REF(ARRAY(datatype=integer[4])),
> 'size'::integer[4],
> 'count'::integer[4],
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.1"):
>
> A:=Array(1..3,[3^14],datatype=integer[4]);
A := [4782969, 0, 0]

>
> p:=fopen1("lla","wb"):
> fwrite4(A,4,3,p):
> fclose1(p):
> B:=Array(1..4,datatype=integer[4]):
> p:=fopen1("lla","rb"):
> fread4(B,4,3,p):
> fclose1(p):
> B;
[4782969, 0, 0, 0]

And afterwards on little-endian x86:

> fread4:=define_external('fread',
> 'ptr'::REF(ARRAY(datatype=integer[4])),
> 'size'::integer[4],
> 'count'::integer[4],
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.6"):
> fread1:=define_external('fread',
> 'ptr'::REF(ARRAY(datatype=integer[1])),
> 'size'::integer[4],
> 'count'::integer[4],
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.6"):
> fopen1:=define_external('fopen',
> 'filename'::string,
> 'mode'::string,
> 'RETURN'::integer[4],
> 'LIB'="libc.so.6"):
> fclose1:=define_external('fclose',
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.6"):
> fwrite1:=define_external('fwrite',
> 'ptr'::REF(ARRAY(datatype=integer[1])),
> 'size'::integer[4],
> 'count'::integer[4],
> 'stream'::integer[4],
> 'RETURN'::integer[4],
> 'LIB'="libc.so.6"):
>
> B:=Array(1..1,datatype=integer[4]):
> p:=fopen1("lla","rb"):
> fread4(B,4,1,p):
> fclose1(p):
> B;
[2046511104]

>
> B1:=Array(1..4,datatype=integer[1]):
> p:=fopen1("lla","rb"):
> fread1(B1,1,4,p):
> fclose1(p):
> B1;
[0, 72, -5, 121]

>
> B1[1],B1[2],B1[3],B1[4]:=B1[4],B1[3],B1[2],B1[1]:
> p:=fopen1("llnew","wb"):
> fwrite1(B1,1,4,p):
> fclose1(p):
>
> C:=Array(1..1,datatype=integer[4]):
> p:=fopen1("llnew","rb"):
> fread4(C,4,1,p):
> fclose1(p):
> C;
[4782969]

As was mentioned, the byte-swapping can be done quickly at the C level.

The code posted for dealing with float[8] Arrays produces binary files which are not interchangeable between big- and little-endian machines. That should be expected. Alec's code simply does what is asked; no fault of his if someone else uses it unreasonably.

Presumably the ImageTools:-Read and the ImportMatrix:-ReadBinaryFile routines take endianness into account (or it is taken into account by readbytes, which they may call) simply by adhering to the headers of the relevant file formats (e.g., .tiff, or binary .mat).

Anyone who wishes to use headerless, raw float[8] binary format files should take the responsibility for the endianness issue if interchanging between {x86, x86-64} and {SPARC, PowerPC} for example.

The plots appear broken once more.

@Joe Riel Thanks for mentioning that, Joe. (And, of course, I had used the round-bracket delimiters in my posted example, but failed to mention them.)

I find that it is not common for me to need one pair of round brackets due to this non-associativity, and rarer still to ever need more than a single pair. So I am content with typing the two extra characters, as opposed to updating my own Maple customizations. But of course that is a personal choice.

I can see that modules help control and manage the (especially global) namespace. But I don't see why the usual :- colon-dash form cannot be made to simply work by default for passing module locals to `showstat`, `stopat`, and `trace`.

I can't really envision how those debugging utilities could be used in a way that would affect normal runtime, so isn't it safe to allow those debugging utilities to just "see" the modules transparently?

How many unknowns do you really have? Did you assign numeric values to r1, r2, and r4 earlier on, and forget to mention that here? (I suspect that this is the case, since otherwise fsolve should instead have issued an error about their being in the equations while not "being solved for".)

What is the function `sen`? Did you make a typo, and intend `sin` instead?

@Alejandro Jakubi In case anyone is inclined to judge Alejandro's response as being too much like "cheating", let me preempt by saying that it is far more subtle. The natural extension is to ask how Maple can automatically provide such applyrule actions using the large variety of patterned identities found in the database of knowledge used by the FunctionAdvisor.

Let me put that another way. It's really nice that Maple now has convert/phaseamp, say. And maybe that deserves its own conversion routine. But there shouldn't be such a specially named conversion for each and every identity that Maple has stored (somewhere). Instead, there should be a central programmatic mechanism for extracting and applying such patterning rules.
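For instance (a sketch only; the names here are invented and the exact pattern syntax may need adjusting), a single such identity can already be applied by hand with applyrule:

> expr := a*sin(t) + b*cos(t):
> applyrule( A::name*sin(x::name) + B::name*cos(x::name)
>            = sqrt(A^2 + B^2)*sin(x + arctan(B, A)), expr );

The point is that the FunctionAdvisor's stored identities could, in principle, feed such rules into a mechanism like this automatically, instead of each identity getting its own convert/... name.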
