Use conda compiler toolchain for conda builds
For both libastra and astra-toolbox:
1) We no longer use script_env to set CC/CXX, since the compilers are
   installed by conda.
2) The build string is made more useful by including either the
   python+numpy version or the cudatoolkit version that the package
   was built with.
3) Some clean-up of build.sh in buildenv/.
For libastra:
1) libastra.so is built with the conda C/C++ compiler toolchain. This
   has two benefits:
   a) The rpath of libastra.so is set to $ORIGIN, which makes linking
      easier for dependent packages.
   b) libastra.so is linkable against ancient versions of glibc, with
      old versions of memcpy.
2) The C/C++ compiler version is fixed to 5.4.0.
3) In libastra/build.sh, we rename $CONDA_PREFIX to $PREFIX.
   Apparently, this is how it is supposed to be done. For me,
   $CONDA_PREFIX was suddenly undefined; why this was not a problem
   before is unclear to me.
4) The cudatoolkit runtime dependency is pinned with pin_compatible.
5) The libastra conda package now provides headers and a .pc file,
   which is useful for building C++ packages that depend on astra.
6) Removed some old code related to cudatoolkit<8.0.
For astra-toolbox:
1) astra-toolbox uses the conda-provided compilers.
2) The compilers are fixed to version 7.3.
3) Added boost to the host requirements of astra-toolbox.
Notes on testing:
- The libastra build has been tested with all versions of cudatoolkit.
- The astra-toolbox build has been tested with all provided versions
  of python, after building a single cudatoolkit version of libastra.
How to test this branch:
- It should work by just editing
  `python/conda/linux_release/buildenv/build.sh`: set
    BRANCH=CI-use-conda-c-compiler-toolchain
    URL=https://github.com/ahendriksen/astra-toolbox
  and run release.sh from the `python/conda/linux_release` directory.
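As a rough illustration of the build-string scheme from point 2 above, the string encodes either the python+numpy versions or the cudatoolkit version the package was built against. This is a hypothetical sketch, not a helper from the actual recipes:

```python
def build_string(python=None, numpy=None, cudatoolkit=None):
    """Sketch of the build-string scheme: encode either the python+numpy
    versions or the cudatoolkit version the package was built with.
    (Hypothetical helper; the real logic lives in the conda recipes.)"""
    if cudatoolkit is not None:
        return "cuda" + cudatoolkit.replace(".", "")
    return "py{}_np{}".format(python.replace(".", ""), numpy.replace(".", ""))

# e.g. build_string(cudatoolkit="10.1") gives "cuda101",
#      build_string(python="3.7", numpy="1.16") gives "py37_np116"
```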
Use recent version of conda during linux conda-build
This fixes the issue where cudatoolkit=8.0 would no longer install
with recent versions of conda. Also, no corruption appears to take
place while downloading packages.
Since we use mex for linking, but CXX for compiling, we also need to
set a preprocessor macro to emulate the -R2017b option. Currently we
use -DMATLAB_MEXCMD_RELEASE=700, but it is unclear if this is the
recommended way.
This is required to build with Matlab R2018a and newer.
The abort handling is currently only used to process Ctrl-C from Matlab.
Since Matlab R2019a, calling utIsInterruptPending() from a thread other
than the main thread appears to crash. The previous approach of checking
utIsInterruptPending() in a separate thread and then signalling the
running algorithm was therefore broken.
Debian 7 is EOL, and CUDA 10.1 doesn't support its version of glibc.
Hardcoded conda=4.6.14 for now, since 4.7.5 seems to be downloading
corrupted packages when running in docker/linux-64.
The previous one was an undocumented educated guess.
Recent versions of setuptools use the full path as part of the name of
the temporary build directory, which made the full temp path too long
when called from conda-build on Windows.
The strip model for a fan beam geometry wasn't taking pixel magnification
into account. Among other things, this resulted in diagonals through
rectangles being weighted the same as horizontal/vertical lines.
This commit fixes this by scaling each pixel contribution by its
magnification on the detector. This is only an approximation (since the
magnification isn't constant inside the pixel), but since pixels are
usually small, the error is also small. Unfortunately, computing this
scaling factor is relatively expensive, because it introduces a square
root in the inner loop.
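A minimal sketch of the magnification factor described above, assuming the usual fan-beam setup where magnification is the ratio of source-detector distance to source-pixel distance (the real projector is C++ and its geometry handling may differ; all names here are illustrative):

```python
import math

def fan_beam_pixel_scale(px, py, sx, sy, d_sd):
    """Approximate magnification of the pixel centre (px, py) onto the
    detector, for a source at (sx, sy) and source-detector distance d_sd.
    Evaluated at the pixel centre only, so it is an approximation; the
    sqrt is the extra per-pixel cost mentioned in the commit message."""
    d_sp = math.sqrt((px - sx) ** 2 + (py - sy) ** 2)  # source-pixel distance
    return d_sd / d_sp
```

A pixel halfway between source and detector thus gets scale 2, while a pixel close to the detector gets a scale near 1.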
There are still some remaining fan2d_strip unit test failures, with
suspicious, slightly-too-large numerical deviations around 45-degree
projections.
In the worst case this would lead to (nearly) empty storage for
getMatrix(), resulting in (nearly) empty explicit projection matrices.
(These are only used for exporting explicit sparse projection matrices
to matlab/python; not for FP/BP/reconstruction.)
This is a quick fix; ideally the affected code would use dynamic storage.
Add basic implementation of par2d CPU Distance Driven projector
This includes the astra_mex_projector('splat') matlab function.
This adds a Matlab geometry visualizer and a sample showing how to use it.
This also replaces automatically modifying the path with a request that the user modify the path themselves.
Signed-off-by: Tim <tim.elberfeld@uantwerpen.be>
Thanks to @ahendriksen.
Read filter config for FBP from cfg.options
The FilterSinogramId, FilterParameter and FilterD options are now only
marked as used if they are actually used, based on the value of
FilterType.
Since these settings are optional, they should have been in cfg.options
instead of directly in cfg. The old syntax remains a fallback.
This has the side-effect that the tomopy/astra interface can also supply them.
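A sketch of what an FBP configuration looks like with the filter settings under the options dict, as described above. The ids are placeholders (in real code they come from astra's data-creation calls), and the plain dict stands in for astra's config object:

```python
# Placeholder ids; real code would obtain these from astra's data managers.
sinogram_id, rec_id = 0, 1

cfg = {
    'type': 'FBP_CUDA',
    'ProjectionDataId': sinogram_id,
    'ReconstructionDataId': rec_id,
    # Optional filter settings now live under 'option'; putting them
    # directly at the top level remains a fallback.
    'option': {'FilterType': 'ram-lak'},
}
```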