Compare commits


47 Commits

Author SHA1 Message Date
Riccardo Spagni
29a505d1c1 Merge pull request #5638
6ed1679b prep for 0.14.1 release (Riccardo Spagni)
2019-06-14 16:09:16 +02:00
Riccardo Spagni
c58255ec12 Merge pull request #5640
542cab02 rpc: restrict the recent cutoff size in restricted RPC mode (moneromooo-monero)
434e617a ensure no NULL is passed to memcpy (moneromooo-monero)
279f1f2c abstract_tcp_server2: improve DoS resistance (moneromooo-monero)
756773e5 serialization: check stream good flag at the end (moneromooo-monero)
e3f714aa tree-hash: allocate variable memory on heap, not stack (moneromooo-monero)
67baa3a6 cryptonote: throw on tx hash calculation error (moneromooo-monero)
d6bb9ecc serialization: fail on read_varint error (moneromooo-monero)
19490e44 cryptonote_protocol: fix another potential P2P DoS (moneromooo-monero)
fa4aa47e cryptonote_protocol: expand basic DoS protection (moneromooo-monero)
3c953d53 cryptonote_protocol_handler: prevent potential DoS (anonimal)
b873b69d epee: basic sanity check on allocation size from untrusted source (moneromooo-monero)
2019-06-14 16:08:59 +02:00
moneromooo-monero
542cab02e1 rpc: restrict the recent cutoff size in restricted RPC mode 2019-06-14 08:48:27 +00:00
moneromooo-monero
434e617a1d ensure no NULL is passed to memcpy
NULL is valid when size is 0, but memcpy uses nonnull attributes,
so let's not poke the bear
2019-06-14 08:48:25 +00:00
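A minimal sketch (not the actual patch) of the kind of guard this commit describes: skip the call entirely when the size is zero, so a NULL pointer never reaches memcpy's nonnull-attributed parameters.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical helper: only forward to memcpy when there is data to copy,
// so a NULL src/dst with n == 0 never triggers the nonnull attribute (UB).
inline void *safe_memcpy(void *dst, const void *src, std::size_t n)
{
  if (n == 0)
    return dst; // nothing to copy; pointers may legitimately be NULL here
  return std::memcpy(dst, src, n);
}
```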
moneromooo-monero
279f1f2c26 abstract_tcp_server2: improve DoS resistance 2019-06-14 08:48:22 +00:00
moneromooo-monero
756773e5fe serialization: check stream good flag at the end
just in case
2019-06-14 08:48:19 +00:00
moneromooo-monero
e3f714aa2a tree-hash: allocate variable memory on heap, not stack
Large amounts might run out of stack

Reported by guidov
2019-06-14 08:48:16 +00:00
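A hedged illustration of the idea (assuming the routine previously used a variable-length stack buffer): move the variable-sized scratch space to the heap so a large input count cannot exhaust the stack.

```cpp
#include <array>
#include <cstddef>
#include <vector>

using hash_t = std::array<unsigned char, 32>;

// Hypothetical sketch: heap-allocated scratch space (std::vector) instead of
// a VLA/alloca on the stack; huge counts now fail as bad_alloc, not a stack overflow.
void tree_hash_scratch(std::size_t count)
{
  std::vector<hash_t> scratch(count);
  // ... fold pairs of hashes in 'scratch' down to the root ...
  (void)scratch;
}
```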
moneromooo-monero
67baa3a66b cryptonote: throw on tx hash calculation error 2019-06-14 08:48:13 +00:00
moneromooo-monero
d6bb9ecc6f serialization: fail on read_varint error 2019-06-14 08:48:10 +00:00
moneromooo-monero
19490e44af cryptonote_protocol: fix another potential P2P DoS
When asking for txes in a fluffy block, one might ask
for the same (large) tx many times
2019-06-14 08:48:07 +00:00
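One plausible way to express that fix (hypothetical, not the actual diff): deduplicate the requested hashes so a peer cannot make the node serve the same large transaction many times per request.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical sketch: drop duplicate tx hashes from a fluffy-block request
// before doing any expensive lookups or serialization.
std::vector<std::string> dedupe_requested_txs(const std::vector<std::string> &requested)
{
  std::unordered_set<std::string> seen;
  std::vector<std::string> unique;
  unique.reserve(requested.size());
  for (const auto &hash : requested)
    if (seen.insert(hash).second)
      unique.push_back(hash);
  return unique;
}
```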
moneromooo-monero
fa4aa47ea0 cryptonote_protocol: expand basic DoS protection
Count transactions as well
2019-06-14 08:48:04 +00:00
anonimal
3c953d5369 cryptonote_protocol_handler: prevent potential DoS
Essentially, one can send such a large amount of IDs that core exhausts
all free memory. This issue can theoretically be exploited using very
large CN blockchains, such as Monero.

This is a partial fix. Thanks and credit given to CryptoNote author
'cryptozoidberg' for collaboration and the fix. Also thanks to
'moneromooo'. Referencing HackerOne report #506595.
2019-06-14 08:48:01 +00:00
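A rough sketch of the mitigation's shape (assumed, since the commit only describes the symptom): bound the number of IDs a peer may send in a single message before any of them are processed or stored.

```cpp
#include <cstddef>
#include <vector>

// Assumed limit for illustration only; the real value lives in the protocol handler.
constexpr std::size_t MAX_IDS_PER_MESSAGE = 4096;

template <typename Id>
bool ids_within_limit(const std::vector<Id> &ids)
{
  // Reject the whole message instead of buffering an unbounded list in memory.
  return ids.size() <= MAX_IDS_PER_MESSAGE;
}
```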
moneromooo-monero
b873b69ded epee: basic sanity check on allocation size from untrusted source
Reported by guidov
2019-06-14 08:47:58 +00:00
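A hedged sketch of the kind of check described: validate any size field taken from untrusted input against a hard cap, and against the bytes actually available, before reserving memory for it.

```cpp
#include <cstddef>
#include <stdexcept>

// Assumed cap, for illustration; the real limit is chosen by epee.
constexpr std::size_t MAX_UNTRUSTED_ALLOC = 100 * 1024 * 1024;

std::size_t checked_alloc_size(std::size_t requested, std::size_t bytes_available)
{
  if (requested > MAX_UNTRUSTED_ALLOC || requested > bytes_available)
    throw std::runtime_error("allocation size from untrusted source fails sanity check");
  return requested;
}
```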
Riccardo Spagni
6ed1679bf8 prep for 0.14.1 release 2019-06-13 17:36:41 +02:00
luigi1111
3395de2e7f Merge pull request #5633
cfa88ac Don't use -march=native (hyc)
e85bf46 Allow parallel make (hyc)
0ef8391 Delete redundant cppzmq dependency (hyc)
86591eb Use 9 digit build IDs (hyc)
2019-06-12 14:50:01 -05:00
luigi1111
9f2882dbb7 Merge pull request #5631
c27d961 [depends] update openssl to 1.0.2r (who-biz)
2019-06-12 14:45:40 -05:00
Howard Chu
86591ebf64 Use 9 digit build IDs 2019-06-12 16:15:07 +01:00
Howard Chu
0ef8391628 Delete redundant cppzmq dependency 2019-06-12 10:21:19 +01:00
Howard Chu
e85bf46641 Allow parallel make 2019-06-12 09:10:37 +01:00
Howard Chu
cfa88acb2b Don't use -march=native 2019-06-12 09:10:29 +01:00
luigi1111
538fae4ec2 Merge pull request #5614
4cff925 p2p: fix GCC 9.1 crash (moneromooo-monero)
f47488c Fix GCC 9.1 build warnings (moneromooo-monero)
ce13a98 cmake: do not use -mmitigate-rop on GCC >= 9.1 (moneromooo-monero)
2019-06-11 17:22:11 -05:00
luigi1111
0c62e7b15f Merge pull request #5622
b0a04f7 epee: fix SSL autodetect on reconnection (xiphon)
2019-06-11 17:05:17 -05:00
luigi1111
24806b5035 Merge pull request #5620
117f950 miner: fix double free of thread attributes (ston1th)
2019-06-11 17:02:51 -05:00
luigi1111
0a1731aa7c Merge pull request #5617
6375111 miniupnpc: update to build on BSD (moneromooo-monero)
2019-06-11 17:01:14 -05:00
ston1th
117f9501d8 miner: fix double free of thread attributes
issue: #5568
2019-06-09 12:29:03 +02:00
moneromooo-monero
ce13a98239 cmake: do not use -mmitigate-rop on GCC >= 9.1
It was removed, but is still accepted by the compiler, which then warns
for every file
2019-06-09 09:40:55 +00:00
moneromooo-monero
f47488c734 Fix GCC 9.1 build warnings
GCC wants operator= and the copy ctor to be either both defined, or neither
2019-06-09 09:39:34 +00:00
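For context, this is most likely GCC 9's `-Wdeprecated-copy`, which fires when a class user-defines one copy operation and implicitly relies on the other; a hypothetical example of the pattern and the usual fix (define or default both):

```cpp
// Declaring operator= alone makes the implicit copy ctor "deprecated" under GCC 9.1.
struct tracked
{
  int value = 0;

  tracked() = default;
  tracked(const tracked &) = default; // added so both copy operations are declared
  tracked &operator=(const tracked &other)
  {
    value = other.value;
    return *this;
  }
};
```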
moneromooo-monero
6375111fa4 miniupnpc: update to build on BSD 2019-06-08 18:37:42 +00:00
moneromooo-monero
4cff9257e0 p2p: fix GCC 9.1 crash 2019-06-08 17:53:13 +00:00
Riccardo Spagni
256f8d8b66 Merge pull request #5584
eeebad66 functional_tests: fix python3 compatibility (moneromooo-monero)
2019-06-01 20:32:39 +02:00
Riccardo Spagni
9a2883266c Merge pull request #5578
b7a96a08 core: update pruning if using --prune-blockchain on a pruned blockchain (moneromooo-monero)
2019-06-01 20:32:18 +02:00
Riccardo Spagni
3f2c82326e Merge pull request #5572
a663ccba blockchain: do not try to pop blocks down to the genesis block (moneromooo-monero)
8f2a99d8 core: do not commit half constructed batch db txn (moneromooo-monero)
2019-06-01 20:31:55 +02:00
Riccardo Spagni
8b9920f0af Merge pull request #5551
de6cfacc refresh and update translations for new release (erciccione)
2019-06-01 20:31:32 +02:00
Riccardo Spagni
162229286f Merge pull request #5565
4456a4b9 Fix allow any cert mode in wallet rpc when configured over rpc (Lee Clagett)
fafc5c36 Add ssl_options support to monerod's rpc mode. (Lee Clagett)
ce73cc3a Fix configuration bug; wallet2 --daemon-ssl-allow-any-cert now works. (Lee Clagett)
2019-06-01 20:31:13 +02:00
Riccardo Spagni
2f5efc7f59 Merge pull request #5562
13864702 functional_tests: fix rare get_output_distribution failure (moneromooo-monero)
2019-06-01 20:30:54 +02:00
Riccardo Spagni
0565fe21ec Merge pull request #5563
205a0ba1 unit_tests: make the density test a bit less stringent (moneromooo-monero)
2019-06-01 20:30:38 +02:00
Riccardo Spagni
ff5b30864a Merge pull request #5564
b6830db2 Fix #5553 (Howard Chu)
2019-06-01 20:30:01 +02:00
moneromooo-monero
eeebad6630 functional_tests: fix python3 compatibility
Also add missing bans test to the default tests
2019-05-29 12:00:33 +00:00
moneromooo-monero
b7a96a0874 core: update pruning if using --prune-blockchain on a pruned blockchain
Avoids a massive amount of spurious warnings if the last update before
the daemon exited was a while ago and the daemon was syncing
2019-05-28 09:12:29 +00:00
moneromooo-monero
a663ccba71 blockchain: do not try to pop blocks down to the genesis block 2019-05-26 17:11:37 +00:00
moneromooo-monero
8f2a99d8ab core: do not commit half constructed batch db txn 2019-05-25 16:25:10 +00:00
moneromooo-monero
13864702f1 functional_tests: fix rare get_output_distribution failure
When the wallet auto refreshes after mining the last two blocks
but before popping them, it will then try to use outputs which
are not unlocked yet. This is really a wallet problem, which
will be fixed later.
2019-05-22 11:24:08 +00:00
moneromooo-monero
205a0ba101 unit_tests: make the density test a bit less stringent
It's an inherently random test
2019-05-22 11:23:20 +00:00
Lee Clagett
4456a4b9b3 Fix allow any cert mode in wallet rpc when configured over rpc 2019-05-21 16:17:50 +00:00
Lee Clagett
fafc5c3692 Add ssl_options support to monerod's rpc mode. 2019-05-21 16:17:34 +00:00
Lee Clagett
ce73cc3add Fix configuration bug; wallet2 --daemon-ssl-allow-any-cert now works. 2019-05-21 16:17:13 +00:00
erciccione
de6cfacc02 refresh and update translations for new release 2019-05-17 14:13:10 +02:00
393 changed files with 15402 additions and 284807 deletions

1
.github/FUNDING.yml

@@ -1 +0,0 @@
custom: https://web.getmonero.org/get-started/contributing/

5
.gitignore

@@ -96,9 +96,6 @@ local.properties
# PDT-specific
.buildpath
# Netbeans-specific
nbproject
# sbteclipse plugin
.target
@@ -106,4 +103,4 @@ nbproject
.texlipse
.idea/
/testnet
/testnet

3
.gitmodules

@@ -12,6 +12,3 @@
[submodule "external/trezor-common"]
path = external/trezor-common
url = https://github.com/trezor/trezor-common.git
[submodule "external/randomx"]
path = external/randomx
url = https://github.com/tevador/RandomX


@@ -18,8 +18,6 @@ env:
- SDK_URL=https://bitcoincore.org/depends-sources/sdks
- DOCKER_PACKAGES="build-essential libtool cmake autotools-dev automake pkg-config bsdmainutils curl git ca-certificates ccache"
matrix:
# RISCV 64bit
- HOST=riscv64-linux-gnu PACKAGES="python3 gperf g++-riscv64-linux-gnu"
# ARM v7
- HOST=arm-linux-gnueabihf PACKAGES="python3 gperf g++-arm-linux-gnueabihf"
# ARM v8
@@ -51,7 +49,7 @@ before_script:
- if [ -n "$OSX_SDK" -a -f contrib/depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz ]; then tar -C contrib/depends/SDKs -xf contrib/depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz; fi
- if [[ $HOST = *-mingw32 ]]; then $DOCKER_EXEC bash -c "update-alternatives --set $HOST-g++ \$(which $HOST-g++-posix)"; fi
- if [[ $HOST = *-mingw32 ]]; then $DOCKER_EXEC bash -c "update-alternatives --set $HOST-gcc \$(which $HOST-gcc-posix)"; fi
- if [ -z "$NO_DEPENDS" ]; then $DOCKER_EXEC bash -c "make $MAKEJOBS -C contrib/depends HOST=$HOST $DEP_OPTS"; fi
- if [ -z "$NO_DEPENDS" ]; then $DOCKER_EXEC bash -c "CONFIG_SHELL= make $MAKEJOBS -C contrib/depends HOST=$HOST $DEP_OPTS"; fi
script:
- git submodule init && git submodule update
- export TRAVIS_COMMIT_LOG=`git log --format=fuller -1`


@@ -43,11 +43,11 @@ additional peers can be found through typical p2p peerlist sharing.
### Outbound Connections
Connecting to an anonymous address requires the command line option
`--tx-proxy` which tells `monerod` the ip/port of a socks proxy provided by a
`--proxy` which tells `monerod` the ip/port of a socks proxy provided by a
separate process. On most systems the configuration will look like:
> `--tx-proxy tor,127.0.0.1:9050,10`
> `--tx-proxy i2p,127.0.0.1:9000`
> `--proxy tor,127.0.0.1:9050,10`
> `--proxy i2p,127.0.0.1:9000`
which tells `monerod` that ".onion" p2p addresses can be forwarded to a socks
proxy at IP 127.0.0.1 port 9050 with a max of 10 outgoing connections and
@@ -114,7 +114,7 @@ encryption.
Options `--add-exclusive-node` and `--add-peer` recognize ".onion" and
".b32.i2p" addresses, and will properly forward those addresses to the proxy
provided with `--tx-proxy tor,...` or `--tx-proxy i2p,...`.
provided with `--proxy tor,...` or `--proxy i2p,...`.
Option `--anonymous-inbound` also recognizes ".onion" and ".b32.i2p" addresses,
and will automatically be sent out to outgoing Tor/I2P connections so the peer
@@ -160,6 +160,25 @@ the system clock is noticeably off (and therefore more fingerprintable),
linking the public IPv4/IPv6 connections with the anonymity networks will be
more difficult.
### Bandwidth Usage
An ISP can passively monitor `monerod` connections from a node and observe when
a transaction is sent over a Tor/I2P connection via timing analysis + size of
data sent during that timeframe. I2P should provide better protection against
this attack - its connections are not circuit based. However, if a node is
only using I2P for broadcasting Monero transactions, the total aggregate of
I2P data would also leak information.
#### Mitigation
There is currently no mitigation available to the user. This attack is fairly
sophisticated, and likely requires support from the internet host of a Monero
user.
In the near future, "whitening" the amount of data sent over anonymity network
connections will be performed. An attempt will be made to make a transaction
broadcast indistinguishable from a peer timed sync command.
### Intermittent Monero Syncing
If a user only runs `monerod` to send a transaction then quit, this can also
@@ -189,36 +208,3 @@ is a tradeoff in potential issues. Also, if anyone attempting this strategy really
wants to uncover a user, it seems unlikely that this would be performed against
every Tor/I2P user.
### I2P/Tor Stream Used Twice
If a single I2P/Tor stream is used 2+ times for transmitting a transaction, the
operator of the hidden service can conclude that both transactions came from the
same source. If the subsequent transactions spend a change output from the
earlier transactions, this will also reveal the "real" spend in the ring
signature. This issue was (primarily) raised by @secparam on Twitter.
#### Mitigation
`monerod` currently selects two outgoing connections every 5 minutes for
transmitting transactions over I2P/Tor. Using outgoing connections prevents an
adversary from making many incoming connections to obtain information (this
technique was taken from Dandelion). Outgoing connections also do not have a
persistent public key identity - the creation of a new circuit will generate
a new public key identity. The lock time on a change address is ~20 minutes, so
`monerod` will have rotated its selected outgoing connections several times in
most cases. However, the number of outgoing connections is typically a small
fixed number, so there is a decent probability of re-use with the same public
key identity.
@secparam (twitter) recommended changing circuits (Tor) as an additional
precaution. This is likely not a good idea - forcibly requesting Tor to change
circuits is observable by the ISP. Instead, `monerod` should likely disconnect
from peers occasionally. Tor will rotate circuits every ~10 minutes, so
establishing new connections will use a new public key identity and make it
more difficult for the hidden service to link information. This process will
have to be done carefully because closing/reconnecting connections can also
leak information to hidden services if done improperly.
At the current time, if users need to frequently make transactions, I2P/Tor
will improve privacy from ISPs and other common adversaries, but still have
some metadata leakages to unknown hidden service operators.


@@ -27,9 +27,6 @@
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
list(INSERT CMAKE_MODULE_PATH 0
"${CMAKE_SOURCE_DIR}/cmake")
include(CheckCCompilerFlag)
@@ -37,7 +34,6 @@ include(CheckCXXCompilerFlag)
include(CheckLinkerFlag)
include(CheckLibraryExists)
include(CheckFunctionExists)
include(FindPythonInterp)
if (IOS)
INCLUDE(CmakeLists_IOS.txt)
@@ -118,7 +114,6 @@ string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
# when ARCH is not set to an explicit identifier, cmake's builtin is used
# to identify the target architecture, to direct logic in this cmake script.
# Since ARCH is a cached variable, it will not be set on first cmake invocation.
if (NOT ARCH_ID)
if (NOT ARCH OR ARCH STREQUAL "" OR ARCH STREQUAL "native" OR ARCH STREQUAL "default")
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "")
set(CMAKE_SYSTEM_PROCESSOR ${CMAKE_HOST_SYSTEM_PROCESSOR})
@@ -127,7 +122,6 @@ if (NOT ARCH OR ARCH STREQUAL "" OR ARCH STREQUAL "native" OR ARCH STREQUAL "def
else()
set(ARCH_ID "${ARCH}")
endif()
endif()
string(TOLOWER "${ARCH_ID}" ARM_ID)
string(SUBSTRING "${ARM_ID}" 0 3 ARM_TEST)
if (ARM_TEST STREQUAL "arm")
@@ -193,7 +187,7 @@ if(NOT MANUAL_SUBMODULES)
if (upToDate)
message(STATUS "Submodule '${relative_path}' is up-to-date")
else()
message(FATAL_ERROR "Submodule '${relative_path}' is not up-to-date. Please update all submodules with\ngit submodule update --init --force\nor run cmake with -DMANUAL_SUBMODULES=1\n")
message(FATAL_ERROR "Submodule '${relative_path}' is not up-to-date. Please update with\ngit submodule update --init --force ${relative_path}\nor run cmake with -DMANUAL_SUBMODULES=1")
endif()
endfunction ()
@@ -202,7 +196,6 @@ if(NOT MANUAL_SUBMODULES)
check_submodule(external/unbound)
check_submodule(external/rapidjson)
check_submodule(external/trezor-common)
check_submodule(external/randomx)
endif()
endif()
@@ -253,12 +246,6 @@ enable_testing()
option(BUILD_DOCUMENTATION "Build the Doxygen documentation." ON)
option(BUILD_TESTS "Build tests." OFF)
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(DEFAULT_BUILD_DEBUG_UTILITIES ON)
else()
set(DEFAULT_BUILD_DEBUG_UTILITIES OFF)
endif()
option(BUILD_DEBUG_UTILITIES "Build debug utilities." DEFAULT_BUILD_DEBUG_UTILITIES)
# Check whether we're on a 32-bit or 64-bit system
if(CMAKE_SIZEOF_VOID_P EQUAL "8")
@@ -363,9 +350,54 @@ endif()
# memory was the default in Cryptonote before Monero implemented LMDB, it still works but is unnecessary.
# set(DATABASE memory)
set(DATABASE lmdb)
message(STATUS "Using LMDB as default DB type")
set(BLOCKCHAIN_DB DB_LMDB)
add_definitions("-DDEFAULT_DB_TYPE=\"lmdb\"")
if (DEFINED ENV{DATABASE})
set(DATABASE $ENV{DATABASE})
message(STATUS "DATABASE set: ${DATABASE}")
else()
message(STATUS "Could not find DATABASE in env (not required unless you want to change database type from default: ${DATABASE})")
endif()
set(BERKELEY_DB_OVERRIDE 0)
if (DEFINED ENV{BERKELEY_DB})
set(BERKELEY_DB_OVERRIDE 1)
set(BERKELEY_DB $ENV{BERKELEY_DB})
elseif()
set(BERKELEY_DB 0)
endif()
if (DATABASE STREQUAL "lmdb")
message(STATUS "Using LMDB as default DB type")
set(BLOCKCHAIN_DB DB_LMDB)
add_definitions("-DDEFAULT_DB_TYPE=\"lmdb\"")
elseif (DATABASE STREQUAL "berkeleydb")
find_package(BerkeleyDB)
if(NOT BERKELEY_DB)
die("Found BerkeleyDB includes, but could not find BerkeleyDB library. Please make sure you have installed libdb and libdb-dev / libdb++-dev or the equivalent.")
else()
message(STATUS "Found BerkeleyDB include (db.h) in ${BERKELEY_DB_INCLUDE_DIR}")
if(BERKELEY_DB_LIBRARIES)
message(STATUS "Found BerkeleyDB shared library")
set(BDB_STATIC false CACHE BOOL "BDB Static flag")
set(BDB_INCLUDE ${BERKELEY_DB_INCLUDE_DIR} CACHE STRING "BDB include path")
set(BDB_LIBRARY ${BERKELEY_DB_LIBRARIES} CACHE STRING "BDB library name")
set(BDB_LIBRARY_DIRS "" CACHE STRING "BDB Library dirs")
set(BERKELEY_DB 1)
else()
die("Found BerkeleyDB includes, but could not find BerkeleyDB library. Please make sure you have installed libdb and libdb-dev / libdb++-dev or the equivalent.")
endif()
endif()
message(STATUS "Using Berkeley DB as default DB type")
add_definitions("-DDEFAULT_DB_TYPE=\"berkeley\"")
else()
die("Invalid database type: ${DATABASE}")
endif()
if(BERKELEY_DB)
add_definitions("-DBERKELEY_DB")
endif()
add_definitions("-DBLOCKCHAIN_DB=${BLOCKCHAIN_DB}")
# Can't install hook in static build on OSX, because OSX linker does not support --wrap
@@ -417,7 +449,7 @@ if (CMAKE_SYSTEM_NAME MATCHES "(SunOS|Solaris)")
endif ()
if (APPLE AND NOT IOS)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=x86-64 -fvisibility=default -std=c++11")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=x86-64 -fvisibility=default")
if (NOT OpenSSL_DIR)
EXECUTE_PROCESS(COMMAND brew --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
@@ -467,6 +499,11 @@ link_directories(${EASYLOGGING_LIBRARY_DIRS})
# Final setup for liblmdb
include_directories(${LMDB_INCLUDE})
# Final setup for Berkeley DB
if (BERKELEY_DB)
include_directories(${BDB_INCLUDE})
endif()
# Final setup for libunwind
include_directories(${LIBUNWIND_INCLUDE})
link_directories(${LIBUNWIND_LIBRARY_DIRS})
@@ -608,7 +645,7 @@ else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-strict-aliasing")
# if those don't work for your compiler, single it out where appropriate
if(CMAKE_BUILD_TYPE STREQUAL "Release" AND NOT OPENBSD)
if(CMAKE_BUILD_TYPE STREQUAL "Release")
set(C_SECURITY_FLAGS "${C_SECURITY_FLAGS} -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1")
set(CXX_SECURITY_FLAGS "${CXX_SECURITY_FLAGS} -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1")
endif()
@@ -620,7 +657,7 @@ else()
add_cxx_flag_if_supported(-Wformat-security CXX_SECURITY_FLAGS)
# -fstack-protector
if (NOT OPENBSD AND NOT (WIN32 AND (CMAKE_C_COMPILER_ID STREQUAL "GNU" AND CMAKE_C_COMPILER_VERSION VERSION_LESS 9.1)))
if (NOT WIN32)
add_c_flag_if_supported(-fstack-protector C_SECURITY_FLAGS)
add_cxx_flag_if_supported(-fstack-protector CXX_SECURITY_FLAGS)
add_c_flag_if_supported(-fstack-protector-strong C_SECURITY_FLAGS)
@@ -628,11 +665,9 @@ else()
endif()
# New in GCC 8.2
if (NOT OPENBSD AND NOT (WIN32 AND (CMAKE_C_COMPILER_ID STREQUAL "GNU" AND CMAKE_C_COMPILER_VERSION VERSION_LESS 9.1)))
if (NOT WIN32)
add_c_flag_if_supported(-fcf-protection=full C_SECURITY_FLAGS)
add_cxx_flag_if_supported(-fcf-protection=full CXX_SECURITY_FLAGS)
endif()
if (NOT WIN32 AND NOT OPENBSD)
add_c_flag_if_supported(-fstack-clash-protection C_SECURITY_FLAGS)
add_cxx_flag_if_supported(-fstack-clash-protection CXX_SECURITY_FLAGS)
endif()
@@ -644,8 +679,8 @@ else()
endif()
# linker
if (NOT (WIN32 AND (CMAKE_C_COMPILER_ID STREQUAL "GNU" AND CMAKE_C_COMPILER_VERSION VERSION_LESS 9.1)))
# Windows binaries die on startup with PIE when compiled with GCC <9.x
if (NOT WIN32)
# Windows binaries die on startup with PIE
add_linker_flag_if_supported(-pie LD_SECURITY_FLAGS)
endif()
add_linker_flag_if_supported(-Wl,-z,relro LD_SECURITY_FLAGS)
@@ -669,7 +704,6 @@ else()
if (WIN32)
add_linker_flag_if_supported(-Wl,--dynamicbase LD_SECURITY_FLAGS)
add_linker_flag_if_supported(-Wl,--nxcompat LD_SECURITY_FLAGS)
add_linker_flag_if_supported(-Wl,--high-entropy-va LD_SECURITY_FLAGS)
endif()
message(STATUS "Using C security hardening flags: ${C_SECURITY_FLAGS}")
@@ -844,10 +878,10 @@ if (${BOOST_IGNORE_SYSTEM_PATHS} STREQUAL "ON")
endif()
set(OLD_LIB_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
set(Boost_NO_BOOST_CMAKE ON)
if(STATIC)
if(MINGW)
set(CMAKE_FIND_LIBRARY_SUFFIXES .a)
set(Boost_NO_BOOST_CMAKE ON)
endif()
set(Boost_USE_STATIC_LIBS ON)
@@ -915,7 +949,7 @@ if (HIDAPI_FOUND OR LibUSB_COMPILE_TEST_PASSED)
endif()
option(USE_READLINE "Build with GNU readline support." ON)
if(USE_READLINE AND NOT DEPENDS)
if(USE_READLINE)
find_package(Readline)
if(READLINE_FOUND AND GNU_READLINE_FOUND)
add_definitions(-DHAVE_READLINE)
@@ -925,14 +959,6 @@ if(USE_READLINE AND NOT DEPENDS)
else()
message(STATUS "Could not find GNU readline library so building without readline support")
endif()
elseif(USE_READLINE AND DEPENDS AND NOT MINGW)
find_path(Readline_INCLUDE_PATH readline/readline.h)
find_library(Readline_LIBRARY readline)
find_library(Terminfo_LIBRARY tinfo)
set(Readline_LIBRARY "${Readline_LIBRARY};${Terminfo_LIBRARY}")
set(GNU_READLINE_LIBRARY ${Readline_LIBRARY})
add_definitions(-DHAVE_READLINE)
set(EPEE_READLINE epee_readline)
endif()
if(ANDROID)
@@ -946,15 +972,14 @@ if(CMAKE_C_COMPILER_ID STREQUAL "Clang" AND ARCH_WIDTH EQUAL "32" AND NOT IOS AN
endif()
endif()
find_path(ZMQ_INCLUDE_PATH zmq.h)
find_path(ZMQ_INCLUDE_PATH zmq.hpp)
find_library(ZMQ_LIB zmq)
find_library(PGM_LIBRARY pgm)
find_library(NORM_LIBRARY norm)
find_library(PROTOLIB_LIBRARY protolib)
find_library(SODIUM_LIBRARY sodium)
if(NOT ZMQ_INCLUDE_PATH)
message(FATAL_ERROR "Could not find required header zmq.h")
message(FATAL_ERROR "Could not find required header zmq.hpp")
endif()
if(NOT ZMQ_LIB)
message(FATAL_ERROR "Could not find required libzmq")
@@ -965,9 +990,6 @@ endif()
if(NORM_LIBRARY)
set(ZMQ_LIB "${ZMQ_LIB};${NORM_LIBRARY}")
endif()
if(PROTOLIB_LIBRARY)
set(ZMQ_LIB "${ZMQ_LIB};${PROTOLIB_LIBRARY}")
endif()
if(SODIUM_LIBRARY)
set(ZMQ_LIB "${ZMQ_LIB};${SODIUM_LIBRARY}")
endif()
@@ -975,18 +997,8 @@ endif()
add_subdirectory(contrib)
add_subdirectory(src)
find_package(PythonInterp)
if(BUILD_TESTS)
message(STATUS "Building tests")
add_subdirectory(tests)
else()
message(STATUS "Not building tests")
endif()
if(BUILD_DEBUG_UTILITIES)
message(STATUS "Building debug utilities")
else()
message(STATUS "Not building debug utilities")
endif()
if(BUILD_DOCUMENTATION)
@@ -1022,11 +1034,3 @@ option(INSTALL_VENDORED_LIBUNBOUND "Install libunbound binary built from source
CHECK_C_COMPILER_FLAG(-std=c11 HAVE_C11)
find_package(PythonInterp)
find_program(iwyu_tool_path NAMES iwyu_tool.py iwyu_tool)
if (iwyu_tool_path AND PYTHONINTERP_FOUND)
add_custom_target(iwyu
COMMAND "${PYTHON_EXECUTABLE}" "${iwyu_tool_path}" -p "${CMAKE_BINARY_DIR}" -- --no_fwd_decls
COMMENT "Running include-what-you-use tool"
VERBATIM
)
endif()


@@ -29,9 +29,9 @@ ENV CFLAGS='-fPIC'
ENV CXXFLAGS='-fPIC'
#Cmake
ARG CMAKE_VERSION=3.14.6
ARG CMAKE_VERSION=3.14.0
ARG CMAKE_VERSION_DOT=v3.14
ARG CMAKE_HASH=4e8ea11cabe459308671b476469eace1622e770317a15951d7b55a82ccaaccb9
ARG CMAKE_HASH=aa76ba67b3c2af1946701f847073f4652af5cbd9f141f221c97af99127e75502
RUN set -ex \
&& curl -s -O https://cmake.org/files/${CMAKE_VERSION_DOT}/cmake-${CMAKE_VERSION}.tar.gz \
&& echo "${CMAKE_HASH} cmake-${CMAKE_VERSION}.tar.gz" | sha256sum -c \
@@ -42,9 +42,9 @@ RUN set -ex \
&& make install
## Boost
ARG BOOST_VERSION=1_70_0
ARG BOOST_VERSION_DOT=1.70.0
ARG BOOST_HASH=430ae8354789de4fd19ee52f3b1f739e1fba576f0aded0897c3c2bc00fb38778
ARG BOOST_VERSION=1_69_0
ARG BOOST_VERSION_DOT=1.69.0
ARG BOOST_HASH=8f32d4617390d1c2d16f26a27ab60d97807b35440d45891fa340fc2648b04406
RUN set -ex \
&& curl -s -L -o boost_${BOOST_VERSION}.tar.bz2 https://dl.bintray.com/boostorg/release/${BOOST_VERSION_DOT}/source/boost_${BOOST_VERSION}.tar.bz2 \
&& echo "${BOOST_HASH} boost_${BOOST_VERSION}.tar.bz2" | sha256sum -c \
@@ -69,8 +69,8 @@ RUN set -ex \
ENV OPENSSL_ROOT_DIR=/usr/local/openssl-${OPENSSL_VERSION}
# ZMQ
ARG ZMQ_VERSION=v4.3.2
ARG ZMQ_HASH=a84ffa12b2eb3569ced199660bac5ad128bff1f0
ARG ZMQ_VERSION=v4.3.1
ARG ZMQ_HASH=2cb1240db64ce1ea299e00474c646a2453a8435b
RUN set -ex \
&& git clone https://github.com/zeromq/libzmq.git -b ${ZMQ_VERSION} \
&& cd libzmq \
@@ -82,8 +82,8 @@ RUN set -ex \
&& ldconfig
# zmq.hpp
ARG CPPZMQ_VERSION=v4.4.1
ARG CPPZMQ_HASH=f5b36e563598d48fcc0d82e589d3596afef945ae
ARG CPPZMQ_VERSION=v4.3.0
ARG CPPZMQ_HASH=213da0b04ae3b4d846c9abc46bab87f86bfb9cf4
RUN set -ex \
&& git clone https://github.com/zeromq/cppzmq.git -b ${CPPZMQ_VERSION} \
&& cd cppzmq \
@@ -103,8 +103,8 @@ RUN set -ex \
&& make install
# Sodium
ARG SODIUM_VERSION=1.0.18
ARG SODIUM_HASH=4f5e89fa84ce1d178a6765b8b46f2b6f91216677
ARG SODIUM_VERSION=1.0.17
ARG SODIUM_HASH=b732443c442239c2e0184820e9b23cca0de0828c
RUN set -ex \
&& git clone https://github.com/jedisct1/libsodium.git -b ${SODIUM_VERSION} \
&& cd libsodium \
@@ -116,8 +116,8 @@ RUN set -ex \
&& make install
# Udev
ARG UDEV_VERSION=v3.2.8
ARG UDEV_HASH=d69f3f28348123ab7fa0ebac63ec2fd16800c5e0
ARG UDEV_VERSION=v3.2.7
ARG UDEV_HASH=4758e346a14126fc3a964de5831e411c27ebe487
RUN set -ex \
&& git clone https://github.com/gentoo/eudev -b ${UDEV_VERSION} \
&& cd eudev \
@@ -152,8 +152,8 @@ RUN set -ex \
&& make install
# Protobuf
ARG PROTOBUF_VERSION=v3.7.1
ARG PROTOBUF_HASH=6973c3a5041636c1d8dc5f7f6c8c1f3c15bc63d6
ARG PROTOBUF_VERSION=v3.7.0
ARG PROTOBUF_HASH=582743bf40c5d3639a70f98f183914a2c0cd0680
RUN set -ex \
&& git clone https://github.com/protocolbuffers/protobuf -b ${PROTOBUF_VERSION} \
&& cd protobuf \


@@ -1,165 +0,0 @@
# Levin Protocol
This is a document explaining the current design of the levin protocol, as
used by Monero. The protocol is largely inherited from cryptonote, but has
undergone some changes.
This document also may differ from the `struct bucket_head2` in Monero's
code slightly - the spec here is slightly more strict to allow for
extensibility.
One of the goals of this document is to clearly indicate what is being sent
"on the wire" to identify metadata that could de-anonymize users over I2P/Tor.
These issues will be addressed as they are found. See `ANONYMITY_NETWORKS.md` in
the top-level folder for any outstanding issues.
> This document does not currently list all data being sent by the monero
> protocol, that portion is a work-in-progress. Please take the time to do it
> if interested in learning about Monero p2p traffic!
## Header
This header is sent for every Monero p2p message.
```
0 1 2 3
0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x01 | 0x21 | 0x01 | 0x01 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x01 | 0x01 | 0x01 | 0x01 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| E. Response | Command
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Return Code
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Q|S|B|E| Reserved
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x01 | 0x00 | 0x00 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x00 |
+-+-+-+-+-+-+-+-+
```
### Signature
The first 8 bytes are the "signature" which helps identify the protocol (in
case someone connected to the wrong port, etc). The comments indicate that the byte
sequence is from "benders nightmare".
This also can be used by deep packet inspection (DPI) engines to identify
Monero when the link is not encrypted. SSL has been proposed as a means to
mitigate this issue, but BIP-151 or the Noise protocol should also be considered.
### Length
The length is an unsigned 64-bit little endian integer. The length does _not_
include the header.
The implementation currently rejects received messages that exceed 100 MB
(base 10) by default.
### Expect Response
A zero-byte if no response is expected from the peer, and a non-zero byte if a
response is expected from the peer. Peers must respond to requests with this
flag in the same order that they were received; however, other messages can be
sent between responses.
There are some commands in the
[cryptonote protocol](#cryptonote-protocol-commands) where a response is
expected from the peer, but this flag is not set. Those responses are returned
as notify messages and can be sent in any order by the peer.
### Command
An unsigned 32-bit little endian integer representing the Monero specific
command being invoked.
### Return Code
A signed 32-bit little endian integer representing the response from the peer
from the last command that was invoked. This is `0` for request messages.
### Flags
* `Q` - Bit is set if the message is a request.
* `S` - Bit is set if the message is a response.
* `B` - Bit is set if this is the beginning of a [fragmented message](#fragmented-messages).
* `E` - Bit is set if this is the end of a [fragmented message](#fragmented-messages).
### Version
A fixed value of `1` as an unsigned 32-bit little endian integer.
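Reading those fields off the wire might look roughly like the struct below (a sketch, not Monero's actual `bucket_head2` declaration; names and the flag bit positions are illustrative). All multi-byte fields are little endian.

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct levin_header
{
  uint64_t signature;       // fixed 8-byte protocol signature ("benders nightmare")
  uint64_t length;          // payload length in bytes, excluding this header
  uint8_t  expect_response; // non-zero if a response is expected
  uint32_t command;         // Monero-specific command number
  int32_t  return_code;     // response code; 0 for requests
  uint32_t flags;           // Q/S/B/E bits as described above (bit positions assumed)
  uint32_t version;         // fixed value 1
};
#pragma pack(pop)

static_assert(sizeof(levin_header) == 33, "levin header is 33 bytes on the wire");
```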
## Message Flow
The protocol can be subdivided into: (1) notifications, (2) requests,
(3) responses, (4) fragmented messages, and (5) dummy messages. Response
messages must be sent in the same order that a peer issued a request message.
A peer does not have to send a response immediately following a request - any
other message type can be sent instead.
### Notifications
Notifications are one-way messages that can be sent at any time without
an expectation of a response from the peer. The `Q` bit must be set, the `S`,
`B` and `E` bits must be unset, and the `Expect Response` field must be zeroed.
Some notifications must be in response to other notifications. This is not
part of the levin messaging layer, and is described in the
[commands](#commands) section.
### Requests
Requests are the basis of the admin protocol for Monero. The `Q` bit must be
set, the `S`, `B` and `E` bits must be unset, and the `Expect Response` field
must be non-zero. The peer is expected to send a response message with the same
`command` number.
### Responses
Response messages can only be sent after a peer first issues a request message.
Responses must have the `S` bit set, the `Q`, `B` and `E` bits unset, and have
a zeroed `Expect Response` field. The `Command` field must be the same value
that was sent in the request message. The `Return Code` is specific to the
`Command` being issued (see [commands](#commands)).
### Fragmented
Fragmented messages were introduced for the "white noise" feature for i2p/tor.
A transaction can be sent in fragments to conceal when "real" data is being
sent instead of dummy messages. Only one fragmented message can be sent at a
time, and bits `B` and `E` are never set at the same time
(see [dummy messages](#dummy)). The re-constructed message must contain a
levin header for a different (non-fragment) message type.
The `Q` and `S` bits are never set and the `Expect Response` field must always
be zero. The first fragment has the `B` bit set, neither `B` nor `E` is set for
"middle" fragments, and `E` is set for the last fragment.
### Dummy
Dummy messages have the `B` and `E` bits set, the `Q` and `S` bits unset, and
the `Expect Response` field zeroed. When a message of this type is received, the
contents can be safely ignored.
## Commands
### P2P (Admin) Commands
#### (`1001` Request) Handshake
#### (`1001` Response) Handshake
#### (`1002` Request) Timed Sync
#### (`1002` Response) Timed Sync
#### (`1003` Request) Ping
#### (`1003` Response) Ping
#### (`1004` Request) Stat Info
#### (`1004` Response) Stat Info
#### (`1005` Request) Network State
#### (`1005` Response) Network State
#### (`1006` Request) Peer ID
#### (`1006` Response) Peer ID
#### (`1007` Request) Support Flags
#### (`1007` Response) Support Flags
### Cryptonote Protocol Commands
#### (`2001` Notification) New Block
#### (`2002` Notification) New Transactions
#### (`2003` Notification) Request Get Objects
#### (`2004` Notification) Response Get Objects
#### (`2006` Notification) Request Chain
#### (`2007` Notification) Response Chain Entry
#### (`2008` Notification) New Fluffy Block
#### (`2009` Notification) Request Fluffy Missing TX


@@ -63,10 +63,6 @@ debug-test:
mkdir -p $(builddir)/debug
cd $(builddir)/debug && cmake -D BUILD_TESTS=ON -D CMAKE_BUILD_TYPE=Debug $(topdir) && $(MAKE) && $(MAKE) ARGS="-E libwallet_api_tests" test
debug-test-asan:
mkdir -p $(builddir)/debug
cd $(builddir)/debug && cmake -D BUILD_TESTS=ON -D SANITIZE=ON -D CMAKE_BUILD_TYPE=Debug $(topdir) && $(MAKE) && $(MAKE) ARGS="-E libwallet_api_tests" test
debug-test-trezor:
mkdir -p $(builddir)/debug
cd $(builddir)/debug && cmake -D BUILD_TESTS=ON -D TREZOR_DEBUG=ON -D CMAKE_BUILD_TYPE=Debug $(topdir) && $(MAKE) && $(MAKE) ARGS="-E libwallet_api_tests" test

201
README.md

@@ -10,7 +10,7 @@ Portions Copyright (c) 2012-2013 The Cryptonote developers.
- [Research](#research)
- [Announcements](#announcements)
- [Translations](#translations)
- [Build Status](#build-status)
- [Build](#build)
- [IMPORTANT](#important)
- [Coverage](#coverage)
- [Introduction](#introduction)
@@ -22,10 +22,6 @@ Portions Copyright (c) 2012-2013 The Cryptonote developers.
- [Release staging schedule and protocol](#release-staging-schedule-and-protocol)
- [Compiling Monero from source](#compiling-monero-from-source)
- [Dependencies](#dependencies)
- [Internationalization](#Internationalization)
- [Using Tor](#using-tor)
- [Debugging](#Debugging)
- [Known issues](#known-issues)
## Development resources
@@ -54,12 +50,12 @@ Our researchers are available on IRC in [#monero-research-lab on Freenode](https
- You can subscribe to an [announcement listserv](https://lists.getmonero.org) to get critical announcements from the Monero core team. The announcement list can be very helpful for knowing when software updates are needed.
## Translations
The CLI wallet is available in different languages. If you want to help translate it, see our self-hosted localization platform, Weblate, on [translate.getmonero.org](https://translate.getmonero.org/projects/CLI/). Every translation *must* be uploaded on the platform, pull requests directly editing the code in this repository will be closed. If you need help with Weblate, you can find a guide with screenshots [here](https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md).
The CLI wallet is available in different languages. If you want to help translate it, see our self-hosted localization platform, Pootle, on [translate.getmonero.org](https://translate.getmonero.org/projects/CLI/). Every translation *must* be uploaded on the platform, pull requests directly editing the code in this repository will be closed. If you need help with Pootle, you can find a guide with screenshots [here](https://github.com/monero-ecosystem/monero-translations/blob/master/pootle.md).
&nbsp;
If you need help/support/info about translations, contact the localization workgroup. You can find the complete list of contacts on the repository of the workgroup: [monero-translations](https://github.com/monero-ecosystem/monero-translations#contacts).
## Build Status
## Build
### IMPORTANT
@@ -151,17 +147,15 @@ Dates are provided in the format YYYY-MM-DD.
| 1546000 | 2018-04-06 | v7 | v0.12.0.0 | v0.12.3.0 | Cryptonight variant 1, ringsize >= 7, sorted inputs
| 1685555 | 2018-10-18 | v8 | v0.13.0.0 | v0.13.0.4 | max transaction size at half the penalty free block size, bulletproofs enabled, cryptonight variant 2, fixed ringsize [11](https://youtu.be/KOO5S4vxi0o)
| 1686275 | 2018-10-19 | v9 | v0.13.0.0 | v0.13.0.4 | bulletproofs required
| 1788000 | 2019-03-09 | v10 | v0.14.0.0 | v0.14.1.2 | New PoW based on Cryptonight-R, new block weight algorithm, slightly more efficient RingCT format
| 1788720 | 2019-03-10 | v11 | v0.14.0.0 | v0.14.1.2 | forbid old RingCT transaction format
| 1978433 | 2019-11-30* | v12 | v0.15.0.0 | v0.15.0.1 | New PoW based on RandomX, only allow >= 2 outputs, change to the block median used to calculate penalty, v1 coinbases are forbidden, rct sigs in coinbase forbidden, 10 block lock time for incoming transactions
| XXXXXXX | XXX-XX-XX | XXX | vX.XX.X.X | vX.XX.X.X | XXX |
| 1788000 | 2019-03-09 | v10 | v0.14.0.0 | v0.14.1.0 | New PoW based on Cryptonight-R, new block weight algorithm, slightly more efficient RingCT format
| 1788720 | 2019-03-10 | v11 | v0.14.0.0 | v0.14.1.0 | forbid old RingCT transaction format
| XXXXXXX | 2019-10-XX | XX | XXXXXXXXX | XXXXXXXXX | X
X's indicate that these details have not been determined as of commit date.
* indicates estimate as of commit date
## Release staging schedule and protocol
Approximately three months prior to a scheduled software upgrade, a branch from master will be created with the new release version tag. Pull requests that address bugs should then be made to both master and the new release branch. Pull requests that require extensive review and testing (generally, optimizations and new features) should *not* be made to the release branch.
Approximately three months prior to a scheduled software upgrade, a branch from Master will be created with the new release version tag. Pull requests that address bugs should then be made to both Master and the new release branch. Pull requests that require extensive review and testing (generally, optimizations and new features) should *not* be made to the release branch.
## Compiling Monero from source
@@ -176,31 +170,26 @@ sources are also used for statically-linked builds because distribution
packages often include only shared library binaries (`.so`) but not static
library archives (`.a`).
| Dep | Min. version | Vendored | Debian/Ubuntu pkg | Arch pkg | Fedora | Optional | Purpose |
| ------------ | ------------- | -------- | -------------------- | ------------ | ------------------- | -------- | --------------- |
| GCC | 4.7.3 | NO | `build-essential` | `base-devel` | `gcc` | NO | |
| CMake | 3.5 | NO | `cmake` | `cmake` | `cmake` | NO | |
| pkg-config | any | NO | `pkg-config` | `base-devel` | `pkgconf` | NO | |
| Boost | 1.58 | NO | `libboost-all-dev` | `boost` | `boost-devel` | NO | C++ libraries |
| OpenSSL | basically any | NO | `libssl-dev` | `openssl` | `openssl-devel` | NO | sha256 sum |
| libzmq | 3.0.0 | NO | `libzmq3-dev` | `zeromq` | `zeromq-devel` | NO | ZeroMQ library |
| OpenPGM | ? | NO | `libpgm-dev` | `libpgm` | `openpgm-devel` | NO | For ZeroMQ |
| libnorm[2] | ? | NO | `libnorm-dev` | | | YES | For ZeroMQ |
| libunbound | 1.4.16 | YES | `libunbound-dev` | `unbound` | `unbound-devel` | NO | DNS resolver |
| libsodium | ? | NO | `libsodium-dev` | `libsodium` | `libsodium-devel` | NO | cryptography |
| libunwind | any | NO | `libunwind8-dev` | `libunwind` | `libunwind-devel` | YES | Stack traces |
| liblzma | any | NO | `liblzma-dev` | `xz` | `xz-devel` | YES | For libunwind |
| libreadline | 6.3.0 | NO | `libreadline6-dev` | `readline` | `readline-devel` | YES | Input editing |
| ldns | 1.6.17 | NO | `libldns-dev` | `ldns` | `ldns-devel` | YES | SSL toolkit |
| expat | 1.1 | NO | `libexpat1-dev` | `expat` | `expat-devel` | YES | XML parsing |
| GTest | 1.5 | YES | `libgtest-dev`[1] | `gtest` | `gtest-devel` | YES | Test suite |
| Doxygen | any | NO | `doxygen` | `doxygen` | `doxygen` | YES | Documentation |
| Graphviz | any | NO | `graphviz` | `graphviz` | `graphviz` | YES | Documentation |
| lrelease | ? | NO | `qttools5-dev-tools` | `qt5-tools` | `qt5-linguist` | YES | Translations |
| libhidapi | ? | NO | `libhidapi-dev` | `hidapi` | `hidapi-devel` | YES | Hardware wallet |
| libusb | ? | NO | `libusb-dev` | `libusb` | `libusb-devel` | YES | Hardware wallet |
| libprotobuf | ? | NO | `libprotobuf-dev` | `protobuf` | `protobuf-devel` | YES | Hardware wallet |
| protoc | ? | NO | `protobuf-compiler` | `protobuf` | `protobuf-compiler` | YES | Hardware wallet |
| Dep | Min. version | Vendored | Debian/Ubuntu pkg | Arch pkg | Fedora | Optional | Purpose |
| ------------ | ------------- | -------- | ------------------ | ------------ | ----------------- | -------- | -------------- |
| GCC | 4.7.3 | NO | `build-essential` | `base-devel` | `gcc` | NO | |
| CMake | 3.5 | NO | `cmake` | `cmake` | `cmake` | NO | |
| pkg-config | any | NO | `pkg-config` | `base-devel` | `pkgconf` | NO | |
| Boost | 1.58 | NO | `libboost-all-dev` | `boost` | `boost-devel` | NO | C++ libraries |
| OpenSSL | basically any | NO | `libssl-dev` | `openssl` | `openssl-devel` | NO | sha256 sum |
| libzmq | 3.0.0 | NO | `libzmq3-dev` | `zeromq` | `cppzmq-devel` | NO | ZeroMQ library |
| OpenPGM | ? | NO | `libpgm-dev` | `libpgm` | `openpgm-devel` | NO | For ZeroMQ |
| libnorm[2] | ? | NO | `libnorm-dev` | | ` | YES | For ZeroMQ |
| libunbound | 1.4.16 | YES | `libunbound-dev` | `unbound` | `unbound-devel` | NO | DNS resolver |
| libsodium | ? | NO | `libsodium-dev` | `libsodium` | `libsodium-devel` | NO | cryptography |
| libunwind | any | NO | `libunwind8-dev` | `libunwind` | `libunwind-devel` | YES | Stack traces |
| liblzma | any | NO | `liblzma-dev` | `xz` | `xz-devel` | YES | For libunwind |
| libreadline | 6.3.0 | NO | `libreadline6-dev` | `readline` | `readline-devel` | YES | Input editing |
| ldns | 1.6.17 | NO | `libldns-dev` | `ldns` | `ldns-devel` | YES | SSL toolkit |
| expat | 1.1 | NO | `libexpat1-dev` | `expat` | `expat-devel` | YES | XML parsing |
| GTest | 1.5 | YES | `libgtest-dev`[1] | `gtest` | `gtest-devel` | YES | Test suite |
| Doxygen | any | NO | `doxygen` | `doxygen` | `doxygen` | YES | Documentation |
| Graphviz | any | NO | `graphviz` | `graphviz` | `graphviz` | YES | Documentation |
[1] On Debian/Ubuntu `libgtest-dev` only includes sources and headers. You must
@@ -209,7 +198,7 @@ build the library binary manually. This can be done with the following command `
Install all dependencies at once on Debian/Ubuntu:
``` sudo apt update && sudo apt install build-essential cmake pkg-config libboost-all-dev libssl-dev libzmq3-dev libunbound-dev libsodium-dev libunwind8-dev liblzma-dev libreadline6-dev libldns-dev libexpat1-dev doxygen graphviz libpgm-dev qttools5-dev-tools libhidapi-dev libusb-dev libprotobuf-dev protobuf-compiler ```
``` sudo apt update && sudo apt install build-essential cmake pkg-config libboost-all-dev libssl-dev libzmq3-dev libunbound-dev libsodium-dev libunwind8-dev liblzma-dev libreadline6-dev libldns-dev libexpat1-dev doxygen graphviz libpgm-dev```
Install all dependencies at once on macOS with the provided Brewfile:
``` brew update && brew bundle --file=contrib/brew/Brewfile ```
@@ -239,7 +228,7 @@ invokes cmake commands as needed.
```bash
cd monero
git checkout release-v0.15
git checkout release-v0.14
make
```
@@ -315,7 +304,7 @@ Tested on a Raspberry Pi Zero with a clean install of minimal Raspbian Stretch (
```bash
git clone https://github.com/monero-project/monero.git
cd monero
git checkout tags/v0.15.0.1
git checkout tags/v0.14.1.0
```
* Build:
@@ -432,10 +421,10 @@ application.
cd monero
```
* If you would like a specific [version/tag](https://github.com/monero-project/monero/tags), do a git checkout for that version. eg. 'v0.15.0.1'. If you don't care about the version and just want binaries from master, skip this step:
* If you would like a specific [version/tag](https://github.com/monero-project/monero/tags), do a git checkout for that version. eg. 'v0.14.1.0'. If you don't care about the version and just want binaries from master, skip this step:
```bash
git checkout v0.15.0.1
git checkout v0.14.1.0
```
* If you are on a 64-bit system, run:
@@ -474,10 +463,100 @@ We expect to add Monero into the ports tree in the near future, which will aid i
### On OpenBSD:
#### OpenBSD < 6.2
This has been tested on OpenBSD 5.8.
You will need to add a few packages to your system. `pkg_add db cmake gcc gcc-libs g++ gtest`.
The doxygen and graphviz packages are optional and require the xbase set.
The Boost package has a bug that will prevent librpc.a from building correctly. In order to fix this, you will have to Build boost yourself from scratch. Follow the directions here (under "Building Boost"):
https://github.com/bitcoin/bitcoin/blob/master/doc/build-openbsd.md
You will have to add the serialization, date_time, and regex modules to Boost when building as they are needed by Monero.
To build: `env CC=egcc CXX=eg++ CPP=ecpp DEVELOPER_LOCAL_TOOLS=1 BOOST_ROOT=/path/to/the/boost/you/built make release-static-64`
#### OpenBSD 6.2 and 6.3
You will need to add a few packages to your system. `pkg_add cmake zeromq libiconv`.
The doxygen and graphviz packages are optional and require the xbase set.
Build the Boost library using clang. This guide is derived from: https://github.com/bitcoin/bitcoin/blob/master/doc/build-openbsd.md
We assume you are compiling with a non-root user and you have `doas` enabled.
Note: do not use the boost package provided by OpenBSD, as we are installing boost to `/usr/local`.
```bash
# Create boost building directory
mkdir ~/boost
cd ~/boost
# Fetch boost source
ftp -o boost_1_64_0.tar.bz2 https://netcologne.dl.sourceforge.net/project/boost/boost/1.64.0/boost_1_64_0.tar.bz2
# MUST output: (SHA256) boost_1_64_0.tar.bz2: OK
echo "7bcc5caace97baa948931d712ea5f37038dbb1c5d89b43ad4def4ed7cb683332 boost_1_64_0.tar.bz2" | sha256 -c
tar xfj boost_1_64_0.tar.bz2
# Fetch and apply boost patches, required for OpenBSD
ftp -o boost_test_impl_execution_monitor_ipp.patch https://raw.githubusercontent.com/openbsd/ports/bee9e6df517077a7269ff0dfd57995f5c6a10379/devel/boost/patches/patch-boost_test_impl_execution_monitor_ipp
ftp -o boost_config_platform_bsd_hpp.patch https://raw.githubusercontent.com/openbsd/ports/90658284fb786f5a60dd9d6e8d14500c167bdaa0/devel/boost/patches/patch-boost_config_platform_bsd_hpp
# MUST output: (SHA256) boost_config_platform_bsd_hpp.patch: OK
echo "1f5e59d1154f16ee1e0cc169395f30d5e7d22a5bd9f86358f738b0ccaea5e51d boost_config_platform_bsd_hpp.patch" | sha256 -c
# MUST output: (SHA256) boost_test_impl_execution_monitor_ipp.patch: OK
echo "30cec182a1437d40c3e0bd9a866ab5ddc1400a56185b7e671bb3782634ed0206 boost_test_impl_execution_monitor_ipp.patch" | sha256 -c
cd boost_1_64_0
patch -p0 < ../boost_test_impl_execution_monitor_ipp.patch
patch -p0 < ../boost_config_platform_bsd_hpp.patch
# Start building boost
echo 'using clang : : c++ : <cxxflags>"-fvisibility=hidden -fPIC" <linkflags>"" <archiver>"ar" <striper>"strip" <ranlib>"ranlib" <rc>"" : ;' > user-config.jam
./bootstrap.sh --without-icu --with-libraries=chrono,filesystem,program_options,system,thread,test,date_time,regex,serialization,locale --with-toolset=clang
./b2 toolset=clang cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++" -sICONV_PATH=/usr/local
doas ./b2 -d0 runtime-link=shared threadapi=pthread threading=multi link=static variant=release --layout=tagged --build-type=complete --user-config=user-config.jam -sNO_BZIP2=1 -sICONV_PATH=/usr/local --prefix=/usr/local install
```
Build the cppzmq bindings.
We assume you are compiling with a non-root user and you have `doas` enabled.
```bash
# Create cppzmq building directory
mkdir ~/cppzmq
cd ~/cppzmq
# Fetch cppzmq source
ftp -o cppzmq-4.2.3.tar.gz https://github.com/zeromq/cppzmq/archive/v4.2.3.tar.gz
# MUST output: (SHA256) cppzmq-4.2.3.tar.gz: OK
echo "3e6b57bf49115f4ae893b1ff7848ead7267013087dc7be1ab27636a97144d373 cppzmq-4.2.3.tar.gz" | sha256 -c
tar xfz cppzmq-4.2.3.tar.gz
# Start building cppzmq
cd cppzmq-4.2.3
mkdir build
cd build
cmake ..
doas make install
```
Build monero:
```bash
env DEVELOPER_LOCAL_TOOLS=1 BOOST_ROOT=/usr/local make release-static
```
#### OpenBSD >= 6.4
You will need to add a few packages to your system. `pkg_add cmake gmake zeromq cppzmq libiconv boost`.
The `doxygen` and `graphviz` packages are optional and require the xbase set.
Running the test suite also requires `py-requests` package.
The doxygen and graphviz packages are optional and require the xbase set.
Build monero: `env DEVELOPER_LOCAL_TOOLS=1 BOOST_ROOT=/usr/local gmake release-static`
@@ -545,8 +624,6 @@ You can also cross-compile static binaries on Linux for Windows and macOS with t
* Requires: `g++-arm-linux-gnueabihf`
* ```make depends target=aarch64-linux-gnu``` for armv8 binaries.
* Requires: `g++-aarch64-linux-gnu`
* ```make depends target=riscv64-linux-gnu``` for RISC V 64 bit binaries.
* Requires: `g++-riscv64-linux-gnu`
The required packages are the names for each toolchain on apt. Depending on your distro, they may have different names.
@@ -560,12 +637,13 @@ The produced binaries still link libc dynamically. If the binary is compiled on
Packages are available for
* Debian Bullseye and Sid
* Ubuntu and [snap supported](https://snapcraft.io/docs/core/install) systems, via a community contributed build.
```bash
sudo apt install monero
snap install monero --beta
```
More info and versions in the [Debian package tracker](https://tracker.debian.org/pkg/monero).
Installing a snap is very quick. Snaps are secure. They are isolated with all of their dependencies. Snaps also auto update when a new version is released.
* Arch Linux (via [AUR](https://aur.archlinux.org/)):
- Stable release: [`monero`](https://aur.archlinux.org/packages/monero)
@@ -726,12 +804,6 @@ gdb /path/to/monerod /path/to/dumpfile`
Print the stack trace with `bt`
* If a program crashed and cores are managed by systemd, the following can also get a stack trace for that crash:
```bash
coredumpctl -1 gdb
```
#### To run monero within gdb:
Type `gdb /path/to/monerod`
@@ -773,20 +845,3 @@ The output of `mdb_stat -ea <path to blockchain dir>` will indicate inconsistenc
The output of `mdb_dump -s blocks <path to blockchain dir>` and `mdb_dump -s block_info <path to blockchain dir>` is useful for indicating whether blocks and block_info contain the same keys.
These records are dumped as hex data, where the first line is the key and the second line is the data.
# Known Issues
## Protocols
### Socket-based
Because of the nature of the socket-based protocols that drive monero, certain protocol weaknesses are somewhat unavoidable at this time. While these weaknesses can theoretically be fully mitigated, the effort required (the means) may not justify the ends. As such, please consider taking the following precautions if you are a monero node operator:
- Run `monerod` on a "secured" machine. If operational security is not your forte, at a very minimum, have a dedicated computer running `monerod` and **do not** browse the web, use email clients, or use any other potentially harmful apps on your `monerod` machine. **Do not click links or load URL/MUA content on the same machine**. Doing so may potentially exploit weaknesses in commands which accept "localhost" and "127.0.0.1".
- If you plan on hosting a public "remote" node, start `monerod` with `--restricted-rpc`. This is a must.
### Blockchain-based
Certain blockchain "features" can be considered "bugs" if misused correctly. Consequently, please consider the following:
- When receiving monero, be aware that it may be locked for an arbitrary time if the sender elected to, preventing you from spending that monero until the lock time expires. You may want to hold off acting upon such a transaction until the unlock time lapses. To get a sense of that time, you can consider the remaining blocktime until unlock as seen in the `show_transfers` command.


@@ -159,7 +159,7 @@ if(Protobuf_FOUND AND USE_DEVICE_TREZOR AND TREZOR_PYTHON AND Protobuf_COMPILE_T
set(TREZOR_LIBUSB_LIBRARIES "")
if(LibUSB_COMPILE_TEST_PASSED)
list(APPEND TREZOR_LIBUSB_LIBRARIES ${LibUSB_LIBRARIES} ${LIBUSB_DEP_LINKER})
list(APPEND TREZOR_LIBUSB_LIBRARIES ${LibUSB_LIBRARIES})
message(STATUS "Trezor compatible LibUSB found at: ${LibUSB_INCLUDE_DIRS}")
endif()
@@ -174,7 +174,7 @@ if(Protobuf_FOUND AND USE_DEVICE_TREZOR AND TREZOR_PYTHON AND Protobuf_COMPILE_T
if (TREZOR_LIBUSB_LIBRARIES)
list(APPEND TREZOR_DEP_LIBS ${TREZOR_LIBUSB_LIBRARIES})
string(APPEND TREZOR_DEP_LINKER " -lusb-1.0 ${LIBUSB_DEP_LINKER}")
string(APPEND TREZOR_DEP_LINKER " -lusb-1.0")
endif()
endif()
endif()


@@ -0,0 +1,25 @@
# - Try to find Berkeley DB
# Once done this will define
#
# BERKELEY_DB_FOUND - system has Berkeley DB
# BERKELEY_DB_INCLUDE_DIR - the Berkeley DB include directory
# BERKELEY_DB_LIBRARIES - Link these to use Berkeley DB
# BERKELEY_DB_DEFINITIONS - Compiler switches required for using Berkeley DB
# Copyright (c) 2006, Alexander Dymo, <adymo@kdevelop.org>
#
# Redistribution and use is allowed according to the terms of the BSD license.
# For details see the accompanying COPYING-CMAKE-SCRIPTS file.
find_path(BERKELEY_DB_INCLUDE_DIR db_cxx.h
/usr/include/db4
/usr/local/include/db4
)
find_library(BERKELEY_DB_LIBRARIES NAMES db_cxx )
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Berkeley "Could not find Berkeley DB >= 4.1" BERKELEY_DB_INCLUDE_DIR BERKELEY_DB_LIBRARIES)
# show the BERKELEY_DB_INCLUDE_DIR and BERKELEY_DB_LIBRARIES variables only in the advanced view
mark_as_advanced(BERKELEY_DB_INCLUDE_DIR BERKELEY_DB_LIBRARIES )


@@ -119,12 +119,6 @@ if ( LibUSB_FOUND )
find_library(IOKIT IOKit)
list(APPEND TEST_COMPILE_EXTRA_LIBRARIES ${IOKIT})
list(APPEND TEST_COMPILE_EXTRA_LIBRARIES ${COREFOUNDATION})
if(STATIC)
find_library(OBJC objc.a)
set(LIBUSB_DEP_LINKER ${OBJC})
list(APPEND TEST_COMPILE_EXTRA_LIBRARIES ${LIBUSB_DEP_LINKER})
endif()
endif()
endif()
if (WIN32)

64
cmake/GenVersion.cmake

@@ -0,0 +1,64 @@
# Copyright (c) 2014-2019, The Monero Project
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are
# permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of
# conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice, this list
# of conditions and the following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors may be
# used to endorse or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
# THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
# Check what commit we're on
execute_process(COMMAND "${GIT}" rev-parse --short=9 HEAD RESULT_VARIABLE RET OUTPUT_VARIABLE COMMIT OUTPUT_STRIP_TRAILING_WHITESPACE)
if(RET)
# Something went wrong, set the version tag to -unknown
message(WARNING "Cannot determine current commit. Make sure that you are building either from a Git working tree or from a source archive.")
set(VERSIONTAG "unknown")
configure_file("src/version.cpp.in" "${TO}")
else()
string(SUBSTRING ${COMMIT} 0 9 COMMIT)
message(STATUS "You are currently on commit ${COMMIT}")
# Get all the tags
execute_process(COMMAND "${GIT}" rev-list --tags --max-count=1 --abbrev-commit RESULT_VARIABLE RET OUTPUT_VARIABLE TAGGEDCOMMIT OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT TAGGEDCOMMIT)
message(WARNING "Cannot determine most recent tag. Make sure that you are building either from a Git working tree or from a source archive.")
set(VERSIONTAG "${COMMIT}")
else()
message(STATUS "The most recent tag was at ${TAGGEDCOMMIT}")
# Check if we're building that tagged commit or a different one
if(COMMIT STREQUAL TAGGEDCOMMIT)
message(STATUS "You are building a tagged release")
set(VERSIONTAG "release")
else()
message(STATUS "You are ahead of or behind a tagged release")
set(VERSIONTAG "${COMMIT}")
endif()
endif()
configure_file("src/version.cpp.in" "${TO}")
endif()

View File

@@ -1,79 +0,0 @@
# Copyright (c) 2014-2019, The Monero Project
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are
# permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of
# conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice, this list
# of conditions and the following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors may be
# used to endorse or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
# THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
# Check what commit we're on
function (get_version_tag_from_git GIT)
execute_process(COMMAND "${GIT}" rev-parse --short=9 HEAD
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
RESULT_VARIABLE RET
OUTPUT_VARIABLE COMMIT
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(RET)
# Something went wrong, set the version tag to -unknown
message(WARNING "Cannot determine current commit. Make sure that you are building either from a Git working tree or from a source archive.")
set(VERSIONTAG "unknown")
set(VERSION_IS_RELEASE "false")
else()
string(SUBSTRING ${COMMIT} 0 9 COMMIT)
message(STATUS "You are currently on commit ${COMMIT}")
# Get all the tags
execute_process(COMMAND "${GIT}" rev-list --tags --max-count=1 --abbrev-commit
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
RESULT_VARIABLE RET
OUTPUT_VARIABLE TAGGEDCOMMIT
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT TAGGEDCOMMIT)
message(WARNING "Cannot determine most recent tag. Make sure that you are building either from a Git working tree or from a source archive.")
set(VERSIONTAG "${COMMIT}")
set(VERSION_IS_RELEASE "false")
else()
message(STATUS "The most recent tag was at ${TAGGEDCOMMIT}")
# Check if we're building that tagged commit or a different one
if(COMMIT STREQUAL TAGGEDCOMMIT)
message(STATUS "You are building a tagged release")
set(VERSIONTAG "release")
set(VERSION_IS_RELEASE "true")
else()
message(STATUS "You are ahead of or behind a tagged release")
set(VERSIONTAG "${COMMIT}")
set(VERSION_IS_RELEASE "false")
endif()
endif()
endif()
set(VERSIONTAG "${VERSIONTAG}" PARENT_SCOPE)
set(VERSION_IS_RELEASE "${VERSION_IS_RELEASE}" PARENT_SCOPE)
endfunction()

View File

@@ -26,25 +26,27 @@
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
function (write_version tag)
set(VERSIONTAG "${tag}" CACHE STRING "The tag portion of the Monero software version" FORCE)
function (write_static_version_header hash)
set(VERSIONTAG "${hash}")
configure_file("${CMAKE_SOURCE_DIR}/src/version.cpp.in" "${CMAKE_BINARY_DIR}/version.cpp")
endfunction ()
find_package(Git QUIET)
if ("$Format:$" STREQUAL "")
# We're in a tarball; use hard-coded variables.
set(VERSION_IS_RELEASE "true")
write_version("release")
write_static_version_header("release")
elseif (GIT_FOUND OR Git_FOUND)
message(STATUS "Found Git: ${GIT_EXECUTABLE}")
include(GitVersion)
get_version_tag_from_git("${GIT_EXECUTABLE}")
write_version("${VERSIONTAG}")
add_custom_command(
OUTPUT "${CMAKE_BINARY_DIR}/version.cpp"
COMMAND "${CMAKE_COMMAND}"
"-D" "GIT=${GIT_EXECUTABLE}"
"-D" "TO=${CMAKE_BINARY_DIR}/version.cpp"
"-P" "cmake/GenVersion.cmake"
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}")
else()
message(STATUS "WARNING: Git was not found!")
set(VERSION_IS_RELEASE "false")
write_version("unknown")
write_static_version_header("unknown")
endif ()
add_custom_target(genversion ALL
DEPENDS "${CMAKE_BINARY_DIR}/version.cpp")

View File
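The two hunks above contrast a configure-time scheme, in which write_version() bakes VERSIONTAG into the CMake cache, with a build-time scheme, in which a genversion custom target runs cmake -P cmake/GenVersion.cmake to regenerate version.cpp from the current HEAD. In both cases configure_file() fills a template with the tag chosen by the logic in GenVersion.cmake: "release" for a tagged commit, the 9-character abbreviated hash otherwise, or "unknown" when git is unavailable. A rough sketch of the kind of file that substitution produces (names are illustrative, not the repository's actual src/version.cpp.in):

// Illustrative only: configure_file() replaces @VERSIONTAG@ with "release",
// a 9-character commit hash, or "unknown", as decided by GenVersion.cmake.
#include <cstdio>

static const char BUILD_VERSIONTAG[] = "@VERSIONTAG@"; // literal before substitution

int main()
{
    std::printf("version tag: %s\n", BUILD_VERSIONTAG);
    return 0;
}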

@@ -53,12 +53,8 @@ endif
host_arch=$(firstword $(subst -, ,$(canonical_host)))
host_vendor=$(word 2,$(subst -, ,$(canonical_host)))
full_host_os:=$(subst $(host_arch)-$(host_vendor)-,,$(canonical_host))
host_os:=$(findstring android,$(full_host_os))
ifeq ($(host_os),)
host_os:=$(findstring linux,$(full_host_os))
endif
host_os+=$(findstring darwin,$(full_host_os))
host_os+=$(findstring freebsd,$(full_host_os))
host_os+=$(findstring mingw32,$(full_host_os))
host_os:=$(strip $(host_os))
ifeq ($(host_os),)
@@ -75,15 +71,9 @@ endif
ifeq ($(host_os),linux)
host_cmake=Linux
endif
ifeq ($(host_os),freebsd)
host_cmake=FreeBSD
endif
ifeq ($(host_os),darwin)
host_cmake=Darwin
endif
ifeq ($(host_os),android)
host_cmake=Android
endif
AT_$(V):=
AT_:=@
@@ -227,6 +217,4 @@ download-win:
@$(MAKE) -s HOST=x86_64-w64-mingw32 download-one
download: download-osx download-linux download-win
$(foreach package,$(all_packages),$(eval $(call ext_add_stages,$(package))))
.PHONY: install cached download-one download-osx download-linux download-win download check-packages check-sources

View File

@@ -33,7 +33,6 @@ Common `host-platform-triplets` for cross compilation are:
- `x86_64-apple-darwin11` for MacOSX
- `arm-linux-gnueabihf` for Linux ARM 32 bit
- `aarch64-linux-gnu` for Linux ARM 64 bit
- `riscv64-linux-gnu` for Linux RISCV 64 bit
No other options are needed, the paths are automatically configured.

View File

@@ -17,4 +17,4 @@ define add_build_flags_func
build_$(build_arch)_$(build_os)_$1 += $(build_$(build_os)_$1)
build_$1=$$(build_$(build_arch)_$(build_os)_$1)
endef
$(foreach flags, CFLAGS CXXFLAGS ARFLAGS LDFLAGS, $(eval $(call add_build_flags_func,$(flags))))
$(foreach flags, CFLAGS CXXFLAGS LDFLAGS, $(eval $(call add_build_flags_func,$(flags))))

View File

@@ -10,7 +10,6 @@ $(1)_libtool=$($($(1)_type)_LIBTOOL)
$(1)_nm=$($($(1)_type)_NM)
$(1)_cflags=$($($(1)_type)_CFLAGS) $($($(1)_type)_$(release_type)_CFLAGS)
$(1)_cxxflags=$($($(1)_type)_CXXFLAGS) $($($(1)_type)_$(release_type)_CXXFLAGS)
$(1)_arflags=$($($(1)_type)_ARFLAGS) $($($(1)_type)_$(release_type)_ARFLAGS)
$(1)_ldflags=$($($(1)_type)_LDFLAGS) $($($(1)_type)_$(release_type)_LDFLAGS) -L$($($(1)_type)_prefix)/lib
$(1)_cppflags=$($($(1)_type)_CPPFLAGS) $($($(1)_type)_$(release_type)_CPPFLAGS) -I$($($(1)_type)_prefix)/include
$(1)_recipe_hash:=
@@ -103,11 +102,6 @@ $(1)_cxxflags+=$($(1)_cxxflags_$(host_arch)) $($(1)_cxxflags_$(host_arch)_$(rele
$(1)_cxxflags+=$($(1)_cxxflags_$(host_os)) $($(1)_cxxflags_$(host_os)_$(release_type))
$(1)_cxxflags+=$($(1)_cxxflags_$(host_arch)_$(host_os)) $($(1)_cxxflags_$(host_arch)_$(host_os)_$(release_type))
$(1)_arflags+=$($(1)_arflags_$(release_type))
$(1)_arflags+=$($(1)_arflags_$(host_arch)) $($(1)_arflags_$(host_arch)_$(release_type))
$(1)_arflags+=$($(1)_arflags_$(host_os)) $($(1)_arflags_$(host_os)_$(release_type))
$(1)_arflags+=$($(1)_arflags_$(host_arch)_$(host_os)) $($(1)_arflags_$(host_arch)_$(host_os)_$(release_type))
$(1)_cppflags+=$($(1)_cppflags_$(release_type))
$(1)_cppflags+=$($(1)_cppflags_$(host_arch)) $($(1)_cppflags_$(host_arch)_$(release_type))
$(1)_cppflags+=$($(1)_cppflags_$(host_os)) $($(1)_cppflags_$(host_os)_$(release_type))
@@ -149,9 +143,6 @@ endif
ifneq ($($(1)_ar),)
$(1)_autoconf += AR="$$($(1)_ar)"
endif
ifneq ($($(1)_arflags),)
$(1)_autoconf += ARFLAGS="$$($(1)_arflags)"
endif
ifneq ($($(1)_cflags),)
$(1)_autoconf += CFLAGS="$$($(1)_cflags)"
endif
@@ -222,14 +213,6 @@ $(1): | $($(1)_cached_checksum)
endef
stages = fetched extracted preprocessed configured built staged postprocessed cached cached_checksum
define ext_add_stages
$(foreach stage,$(stages),
$(1)_$(stage): $($(1)_$(stage))
.PHONY: $(1)_$(stage))
endef
# These functions create the build targets for each package. They must be
# broken down into small steps so that each part is done for all packages
# before moving on to the next step. Otherwise, a package's info

View File

@@ -1,22 +0,0 @@
ANDROID_API=21
ifeq ($(host_arch),arm)
host_toolchain=arm-linux-androideabi-
endif
android_CC=$(host_toolchain)clang
android_CXX=$(host_toolchain)clang++
android_RANLIB=:
android_CFLAGS=-pipe
android_CXXFLAGS=$(android_CFLAGS)
android_ARFLAGS=crsD
android_release_CFLAGS=-O2
android_release_CXXFLAGS=$(android_release_CFLAGS)
android_debug_CFLAGS=-g -O0
android_debug_CXXFLAGS=$(android_debug_CFLAGS)
android_native_toolchain=android_ndk

View File

@@ -7,7 +7,6 @@ darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) --sys
darwin_CFLAGS=-pipe
darwin_CXXFLAGS=$(darwin_CFLAGS)
darwin_ARFLAGS=cr
darwin_release_CFLAGS=-O1
darwin_release_CXXFLAGS=$(darwin_release_CFLAGS)

View File

@@ -23,4 +23,4 @@ host_$(release_type)_$1 = $$($(host_arch)_$(host_os)_$(release_type)_$1)
endef
$(foreach tool,CC CXX AR RANLIB STRIP NM LIBTOOL OTOOL INSTALL_NAME_TOOL,$(eval $(call add_host_tool_func,$(tool))))
$(foreach flags,CFLAGS CXXFLAGS ARFLAGS CPPFLAGS LDFLAGS, $(eval $(call add_host_flags_func,$(flags))))
$(foreach flags,CFLAGS CXXFLAGS CPPFLAGS LDFLAGS, $(eval $(call add_host_flags_func,$(flags))))

View File

@@ -1,18 +0,0 @@
freebsd_CC=clang-8
freebsd_CXX=clang++-8
freebsd_AR=ar
freebsd_RANLIB=ranlib
freebsd_NM=nm
freebsd_CFLAGS=-pipe
freebsd_CXXFLAGS=$(freebsd_CFLAGS)
freebsd_ARFLAGS=cr
freebsd_release_CFLAGS=-O2
freebsd_release_CXXFLAGS=$(freebsd_release_CFLAGS)
freebsd_debug_CFLAGS=-g -O0
freebsd_debug_CXXFLAGS=$(freebsd_debug_CFLAGS)
freebsd_native_toolchain=freebsd_base

View File

@@ -1,6 +1,5 @@
linux_CFLAGS=-pipe
linux_CXXFLAGS=$(linux_CFLAGS)
linux_ARFLAGS=cr
linux_release_CFLAGS=-O2
linux_release_CXXFLAGS=$(linux_release_CFLAGS)

View File

@@ -1,6 +1,5 @@
mingw32_CFLAGS=-pipe
mingw32_CXXFLAGS=$(mingw32_CFLAGS)
mingw32_ARFLAGS=cr
mingw32_release_CFLAGS=-O2
mingw32_release_CXXFLAGS=$(mingw32_release_CFLAGS)

View File

@@ -1,22 +0,0 @@
package=android_ndk
$(package)_version=17b
$(package)_download_path=https://dl.google.com/android/repository/
$(package)_file_name=android-ndk-r$($(package)_version)-linux-x86_64.zip
$(package)_sha256_hash=5dfbbdc2d3ba859fed90d0e978af87c71a91a5be1f6e1c40ba697503d48ccecd
define $(package)_set_vars
$(package)_config_opts_arm=--arch arm
$(package)_config_opts_aarch64=--arch arm64
endef
define $(package)_extract_cmds
echo $($(package)_sha256_hash) $($(1)_source_dir)/$($(package)_file_name) | sha256sum -c &&\
unzip -q $($(1)_source_dir)/$($(package)_file_name)
endef
define $(package)_stage_cmds
android-ndk-r$($(package)_version)/build/tools/make_standalone_toolchain.py --api 21 \
--install-dir $(build_prefix) --stl=libc++ $($(package)_config_opts) &&\
mv $(build_prefix) $($(package)_staging_dir)/$(host_prefix)
endef

View File

@@ -3,8 +3,6 @@ $(package)_version=1_64_0
$(package)_download_path=https://dl.bintray.com/boostorg/release/1.64.0/source/
$(package)_file_name=$(package)_$($(package)_version).tar.bz2
$(package)_sha256_hash=7bcc5caace97baa948931d712ea5f37038dbb1c5d89b43ad4def4ed7cb683332
$(package)_dependencies=libiconv
$(package)_patches=fix_aroptions.patch
define $(package)_set_vars
$(package)_config_opts_release=variant=release
@@ -12,7 +10,6 @@ $(package)_config_opts_debug=variant=debug
$(package)_config_opts=--layout=tagged --build-type=complete --user-config=user-config.jam
$(package)_config_opts+=threading=multi link=static -sNO_BZIP2=1 -sNO_ZLIB=1
$(package)_config_opts_linux=threadapi=pthread runtime-link=shared
$(package)_config_opts_android=threadapi=pthread runtime-link=static target-os=android
$(package)_config_opts_darwin=--toolset=darwin-4.2.1 runtime-link=shared
$(package)_config_opts_mingw32=binary-format=pe target-os=windows threadapi=win32 runtime-link=static
$(package)_config_opts_x86_64_mingw32=address-model=64
@@ -25,12 +22,10 @@ $(package)_archiver_darwin=$($(package)_libtool)
$(package)_config_libraries=chrono,filesystem,program_options,system,thread,test,date_time,regex,serialization,locale
$(package)_cxxflags=-std=c++11
$(package)_cxxflags_linux=-fPIC
$(package)_cxxflags_freebsd=-fPIC
endef
define $(package)_preprocess_cmds
patch -p1 < $($(package)_patch_dir)/fix_aroptions.patch &&\
echo "using $(boost_toolset_$(host_os)) : : $($(package)_cxx) : <cxxflags>\"$($(package)_cxxflags) $($(package)_cppflags)\" <linkflags>\"$($(package)_ldflags)\" <archiver>\"$(boost_archiver_$(host_os))\" <arflags>\"$($(package)_arflags)\" <striper>\"$(host_STRIP)\" <ranlib>\"$(host_RANLIB)\" <rc>\"$(host_WINDRES)\" : ;" > user-config.jam
echo "using $(boost_toolset_$(host_os)) : : $($(package)_cxx) : <cxxflags>\"$($(package)_cxxflags) $($(package)_cppflags)\" <linkflags>\"$($(package)_ldflags)\" <archiver>\"$(boost_archiver_$(host_os))\" <striper>\"$(host_STR IP)\" <ranlib>\"$(host_RANLIB)\" <rc>\"$(host_WINDRES)\" : ;" > user-config.jam
endef
define $(package)_config_cmds

View File

@@ -1,8 +1,8 @@
package=cppzmq
$(package)_version=4.4.1
$(package)_version=4.2.3
$(package)_download_path=https://github.com/zeromq/cppzmq/archive/
$(package)_file_name=v$($(package)_version).tar.gz
$(package)_sha256_hash=117fc1ca24d98dbe1a60c072cde13be863d429134907797f8e03f654ce679385
$(package)_sha256_hash=3e6b57bf49115f4ae893b1ff7848ead7267013087dc7be1ab27636a97144d373
$(package)_dependencies=zeromq
define $(package)_stage_cmds

View File

@@ -9,7 +9,7 @@ define $(package)_set_vars
endef
define $(package)_config_cmds
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
define $(package)_build_cmd
@@ -23,7 +23,3 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -6,7 +6,6 @@ $(package)_sha256_hash=03ad85db965f8ab2d27328abcf0bc5571af6ec0a414874b2066ee3fdd
define $(package)_set_vars
$(package)_config_opts=--enable-static
$(package)_config_opts=--disable-shared
$(package)_config_opts+=--prefix=$(host_prefix)
endef
@@ -21,8 +20,3 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -1,23 +0,0 @@
package=freebsd_base
$(package)_version=11.3
$(package)_download_path=https://download.freebsd.org/ftp/releases/amd64/$($(package)_version)-RELEASE/
$(package)_download_file=base.txz
$(package)_file_name=freebsd-base-$($(package)_version).txz
$(package)_sha256_hash=4599023ac136325b86f2fddeec64c1624daa83657e40b00b2ef944c81463a4ff
define $(package)_extract_cmds
echo $($(package)_sha256_hash) $($(1)_source_dir)/$($(package)_file_name) | sha256sum -c &&\
tar xf $($(1)_source_dir)/$($(package)_file_name) ./lib/ ./usr/lib/ ./usr/include/
endef
define $(package)_build_cmds
mkdir bin &&\
echo "exec /usr/bin/clang-8 -target x86_64-unknown-freebsd$($(package)_version) --sysroot=$(host_prefix)/native $$$$""@" > bin/clang-8 &&\
echo "exec /usr/bin/clang++-8 -target x86_64-unknown-freebsd$($(package)_version) --sysroot=$(host_prefix)/native $$$$""@" > bin/clang++-8 &&\
chmod 755 bin/*
endef
define $(package)_stage_cmds
mkdir $($(package)_staging_dir)/$(host_prefix)/native &&\
mv bin lib usr $($(package)_staging_dir)/$(host_prefix)/native
endef

View File

@@ -1,8 +1,8 @@
package=hidapi
$(package)_version=0.9.0
$(package)_download_path=https://github.com/libusb/hidapi/archive
$(package)_version=0.8.0-rc1
$(package)_download_path=https://github.com/signal11/hidapi/archive
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=630ee1834bdd5c5761ab079fd04f463a89585df8fcae51a7bfe4229b1e02a652
$(package)_sha256_hash=3c147200bf48a04c1e927cd81589c5ddceff61e6dac137a605f6ac9793f4af61
$(package)_linux_dependencies=libusb eudev
define $(package)_set_vars
@@ -18,7 +18,7 @@ endef
define $(package)_config_cmds
./bootstrap &&\
$($(package)_autoconf) $($(package)_config_opts) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf) $($(package)_config_opts)
endef
define $(package)_build_cmds
@@ -28,8 +28,3 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -1,8 +1,8 @@
package=icu4c
$(package)_version=55.2
$(package)_download_path=https://github.com/unicode-org/icu/releases/download/release-55-2/
$(package)_file_name=$(package)-55_2-src.tgz
$(package)_sha256_hash=eda2aa9f9c787748a2e2d310590720ca8bcc6252adf6b4cfb03b65bef9d66759
$(package)_version=55.1
$(package)_download_path=https://github.com/TheCharlatan/icu4c/archive
$(package)_file_name=55.1.tar.gz
$(package)_sha256_hash=1f912c54035533fb4268809701d65c7468d00e292efbc31e6444908450cc46ef
$(package)_patches=icu-001-dont-build-static-dynamic-twice.patch
define $(package)_set_vars
@@ -21,6 +21,11 @@ define $(package)_config_cmds
$(MAKE) $($(package)_build_opts)
endef
#define $(package)_build_cmds
# cd source &&\
$(MAKE) $($((package)_build_opts) `nproc`
#endef
define $(package)_stage_cmds
cd buildb &&\
$(MAKE) $($(package)_build_opts) DESTDIR=$($(package)_staging_dir) install lib/*

View File

@@ -6,16 +6,12 @@ $(package)_sha256_hash=8b88e059452118e8949a2752a55ce59bc71fa5bc414103e17f5b6b06f
$(package)_dependencies=openssl
define $(package)_set_vars
$(package)_config_opts=--disable-shared --enable-static --with-drill
$(package)_config_opts+=--with-ssl=$(host_prefix)
$(package)_config_opts=--disable-shared --enable-static --disable-dane-ta-usage --with-drill
$(package)_config_opts=--with-ssl=$(host_prefix)
$(package)_config_opts_release=--disable-debug-mode
$(package)_config_opts_linux=--with-pic
endef
define $(package)_preprocess_cmds
cp -f $(BASEDIR)/config.guess $(BASEDIR)/config.sub .
endef
define $(package)_config_cmds
$($(package)_autoconf)
endef
@@ -29,6 +25,4 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -10,7 +10,6 @@ define $(package)_set_vars
$(package)_config_opts=--enable-static
$(package)_config_opts=--disable-shared
$(package)_config_opts_linux=--with-pic
$(package)_config_opts_freebsd=--with-pic
endef
define $(package)_preprocess_cmds
@@ -19,7 +18,7 @@ define $(package)_preprocess_cmds
endef
define $(package)_config_cmds
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
define $(package)_build_cmds
@@ -29,7 +28,3 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -19,11 +19,11 @@ ifneq ($(host_os),darwin)
define $(package)_config_cmds
cp -f $(BASEDIR)/config.guess config.guess &&\
cp -f $(BASEDIR)/config.sub config.sub &&\
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
else
define $(package)_config_cmds
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
endif

View File

@@ -10,7 +10,6 @@ $(package)_clang_download_file=clang+llvm-$($(package)_clang_version)-x86_64-lin
$(package)_clang_file_name=clang-llvm-$($(package)_clang_version)-x86_64-linux-gnu-ubuntu-14.04.tar.xz
$(package)_clang_sha256_hash=99b28a6b48e793705228a390471991386daa33a9717cd9ca007fcdde69608fd9
$(package)_extra_sources=$($(package)_clang_file_name)
$(package)_patches=skip_otool.patch
define $(package)_fetch_cmds
$(call fetch_file,$(package),$($(package)_download_path),$($(package)_download_file),$($(package)_file_name),$($(package)_sha256_hash)) && \
@@ -38,10 +37,7 @@ $(package)_cc=$($(package)_extract_dir)/toolchain/bin/clang
$(package)_cxx=$($(package)_extract_dir)/toolchain/bin/clang++
endef
# If clang gets updated to a version with a fix for https://reviews.llvm.org/D50559
# then the patch that skips otool can be removed.
define $(package)_preprocess_cmds
patch -p0 < $($(package)_patch_dir)/skip_otool.patch && \
cd $($(package)_build_subdir); ./autogen.sh && \
sed -i.old "/define HAVE_PTHREADS/d" ld64/src/ld/InputFiles.h
endef

View File

@@ -1,64 +0,0 @@
package=ncurses
$(package)_version=6.1
$(package)_download_path=https://ftp.gnu.org/gnu/ncurses
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=aa057eeeb4a14d470101eff4597d5833dcef5965331be3528c08d99cebaa0d17
$(package)_patches=fallback.c
define $(package)_set_vars
$(package)_build_opts=CC="$($(package)_cc)"
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)" ARFLAGS=$($(package)_arflags) cf_cv_ar_flags=""
$(package)_config_env_darwin=RANLIB="$(host_prefix)/native/bin/x86_64-apple-darwin11-ranlib" AR="$(host_prefix)/native/bin/x86_64-apple-darwin11-ar" CC="$(host_prefix)/native/bin/$($(package)_cc)"
$(package)_config_opts=--prefix=$(host_prefix)
$(package)_config_opts+=--disable-shared
$(package)_config_opts+=--with-build-cc=gcc
$(package)_config_opts+=--without-debug
$(package)_config_opts+=--without-ada
$(package)_config_opts+=--without-cxx-binding
$(package)_config_opts+=--without-cxx
$(package)_config_opts+=--without-ticlib
$(package)_config_opts+=--without-tic
$(package)_config_opts+=--without-progs
$(package)_config_opts+=--without-tests
$(package)_config_opts+=--without-tack
$(package)_config_opts+=--without-manpages
$(package)_config_opts+=--with-termlib=tinfo
$(package)_config_opts+=--disable-tic-depends
$(package)_config_opts+=--disable-big-strings
$(package)_config_opts+=--disable-ext-colors
$(package)_config_opts+=--enable-pc-files
$(package)_config_opts+=--host=$(HOST)
$(pacakge)_config_opts+=--without-shared
$(pacakge)_config_opts+=--without-pthread
$(pacakge)_config_opts+=--disable-rpath
$(pacakge)_config_opts+=--disable-colorfgbg
$(pacakge)_config_opts+=--disable-ext-mouse
$(pacakge)_config_opts+=--disable-symlinks
$(pacakge)_config_opts+=--enable-warnings
$(pacakge)_config_opts+=--enable-assertions
$(package)_config_opts+=--with-default-terminfo-dir=/etc/_terminfo_
$(package)_config_opts+=--with-terminfo-dirs=/etc/_terminfo_
$(pacakge)_config_opts+=--enable-database
$(pacakge)_config_opts+=--enable-sp-funcs
$(pacakge)_config_opts+=--disable-term-driver
$(pacakge)_config_opts+=--enable-interop
$(pacakge)_config_opts+=--enable-widec
$(package)_build_opts=CFLAGS="$($(package)_cflags) $($(package)_cppflags) -fPIC"
endef
define $(package)_preprocess_cmds
cp $($(package)_patch_dir)/fallback.c ncurses
endef
define $(package)_config_cmds
./configure $($(package)_config_opts)
endef
define $(package)_build_cmds
$(MAKE) $($(package)_build_opts) V=1
endef
define $(package)_stage_cmds
$(MAKE) install.libs DESTDIR=$($(package)_staging_dir)
endef

View File

@@ -3,10 +3,9 @@ $(package)_version=1.0.2r
$(package)_download_path=https://www.openssl.org/source
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=ae51d08bba8a83958e894946f15303ff894d75c2b8bbd44a852b64e3fe11d0d6
$(package)_patches=fix_arflags.patch
define $(package)_set_vars
$(package)_config_env=AR="$($(package)_ar)" ARFLAGS=$($(package)_arflags) RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)"
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)"
$(package)_config_opts=--prefix=$(host_prefix) --openssldir=$(host_prefix)/etc/openssl
$(package)_config_opts+=no-capieng
$(package)_config_opts+=no-dso
@@ -37,28 +36,21 @@ $(package)_config_opts+=no-zlib
$(package)_config_opts+=no-zlib-dynamic
$(package)_config_opts+=$($(package)_cflags) $($(package)_cppflags)
$(package)_config_opts_linux=-fPIC -Wa,--noexecstack
$(package)_config_opts_freebsd=-fPIC -Wa,--noexecstack
$(package)_config_opts_x86_64_linux=linux-x86_64
$(package)_config_opts_i686_linux=linux-generic32
$(package)_config_opts_arm_linux=linux-generic32
$(package)_config_opts_aarch64_linux=linux-generic64
$(package)_config_opts_arm_android=--static android-armv7 no-asm
$(package)_config_opts_aarch64_android=--static android no-asm
$(package)_config_opts_riscv64_linux=linux-generic64
$(package)_config_opts_mipsel_linux=linux-generic32
$(package)_config_opts_mips_linux=linux-generic32
$(package)_config_opts_powerpc_linux=linux-generic32
$(package)_config_opts_x86_64_darwin=darwin64-x86_64-cc
$(package)_config_opts_x86_64_mingw32=mingw64
$(package)_config_opts_i686_mingw32=mingw
$(package)_config_opts_x86_64_freebsd=BSD-x86_64
endef
define $(package)_preprocess_cmds
sed -i.old "/define DATE/d" util/mkbuildinf.pl && \
sed -i.old "s|engines apps test|engines|" Makefile.org && \
sed -i -e "s/-mandroid //" Configure && \
patch < $($(package)_patch_dir)/fix_arflags.patch
sed -i.old "s|engines apps test|engines|" Makefile.org
endef
define $(package)_config_cmds

View File

@@ -1,34 +1,24 @@
packages:=boost openssl zeromq libiconv
packages:=boost openssl zeromq cppzmq expat ldns readline libiconv hidapi protobuf libusb
native_packages := native_ccache native_protobuf
native_packages := native_ccache
darwin_native_packages = native_biplist native_ds_store native_mac_alias
darwin_packages = sodium-darwin
hardware_packages := hidapi protobuf libusb
hardware_native_packages := native_protobuf
android_native_packages = android_ndk
android_packages = ncurses readline sodium
darwin_native_packages = native_biplist native_ds_store native_mac_alias $(hardware_native_packages)
darwin_packages = sodium ncurses readline $(hardware_packages)
# not really native...
freebsd_native_packages = freebsd_base
freebsd_packages = ncurses readline sodium
linux_packages = eudev ncurses readline sodium $(hardware_packages)
linux_native_packages = $(hardware_native_packages)
linux_packages = eudev
qt_packages = qt
ifeq ($(build_tests),ON)
packages += gtest
endif
ifneq ($(host_arch),riscv64)
linux_packages += unwind
ifeq ($(host_os),linux)
packages += unwind
packages += sodium
endif
ifeq ($(host_os),mingw32)
packages += icu4c
packages += sodium
endif
mingw32_packages = icu4c sodium $(hardware_packages)
mingw32_native_packages = $(hardware_native_packages)
ifneq ($(build_os),darwin)
darwin_native_packages += native_cctools native_cdrkit native_libdmg-hfsplus

View File

@@ -12,7 +12,7 @@ define $(package)_set_vars
endef
define $(package)_config_cmds
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
define $(package)_build_cmds
@@ -25,7 +25,5 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
rm lib/libprotoc.a &&\
rm lib/*.la
rm lib/libprotoc.a
endef

View File

@@ -3,22 +3,19 @@ $(package)_version=8.0
$(package)_download_path=https://ftp.gnu.org/gnu/readline
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=e339f51971478d369f8a053a330a190781acb9864cf4c541060f12078948e461
$(package)_dependencies=ncurses
define $(package)_set_vars
$(package)_build_opts=CC="$($(package)_cc)"
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)" LDFLAGS="-L$(host_prefix)/lib" ARFLAGS=$($(package)_arflags)
$(package)_config_env_darwin=RANLIB="$(host_prefix)/native/bin/x86_64-apple-darwin11-ranlib" AR="$(host_prefix)/native/bin/x86_64-apple-darwin11-ar" CC="$(host_prefix)/native/bin/$($(package)_cc)"
$(package)_config_opts+=--prefix=$(host_prefix)
$(package)_config_opts+=--exec-prefix=$(host_prefix)
$(package)_config_opts+=--host=$(HOST)
$(package)_config_opts+=--disable-shared --with-curses
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)"
$(package)_config_opts=--prefix=$(host_prefix)
$(package)_config_opts+=--disable-shared --enable-multibye --without-purify --without-curses
$(package)_config_opts_release=--disable-debug-mode
$(package)_config_opts_darwin+=RANLIB="$(host_prefix)/native/bin/x86_64-apple-darwin11-ranlib" AR="$(host_prefix)/native/bin/x86_64-apple-darwin11-ar" CC="$(host_prefix)/native/bin/$($(package)_cc)"
$(package)_build_opts=CFLAGS="$($(package)_cflags) $($(package)_cppflags) -fPIC"
endef
define $(package)_config_cmds
export bash_cv_have_mbstate_t=yes &&\
export bash_cv_wcwidth_broken=yes &&\
./configure $($(package)_config_opts)
endef
@@ -27,6 +24,6 @@ define $(package)_build_cmds
endef
define $(package)_stage_cmds
$(MAKE) install DESTDIR=$($(package)_staging_dir) prefix=$(host_prefix) exec-prefix=$(host_prefix)
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef

View File

@@ -0,0 +1,25 @@
package=sodium-darwin
$(package)_version=1.0.16
$(package)_download_path=https://download.libsodium.org/libsodium/releases/
$(package)_file_name=libsodium-$($(package)_version).tar.gz
$(package)_sha256_hash=eeadc7e1e1bcef09680fb4837d448fbdf57224978f865ac1c16745868fbd0533
define $(package)_set_vars
$(package)_build_opts_darwin=OS=Darwin LIBTOOL="$($(package)_libtool)"
$(package)_config_opts=--enable-static --disable-shared
$(package)_config_opts+=--prefix=$(host_prefix)
endef
define $(package)_config_cmds
./autogen.sh &&\
$($(package)_autoconf) $($(package)_config_opts) RANLIB="$(host_prefix)/native/bin/x86_64-apple-darwin11-ranlib" AR="$(host_prefix)/native/bin/x86_64-apple-darwin11-ar" CC="$(host_prefix)/native/bin/$($(package)_cc)"
endef
define $(package)_build_cmds
echo "path is problematic here" &&\
make
endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef

View File

@@ -8,14 +8,12 @@ $(package)_patches=fix-whitespace.patch
define $(package)_set_vars
$(package)_config_opts=--enable-static --disable-shared
$(package)_config_opts+=--prefix=$(host_prefix)
$(package)_config_opts_android=RANLIB=$($(package)_ranlib) AR=$($(package)_ar) CC=$($(package)_cc)
$(package)_config_opts_darwin=RANLIB="$(host_prefix)/native/bin/x86_64-apple-darwin11-ranlib" AR="$(host_prefix)/native/bin/x86_64-apple-darwin11-ar" CC="$(host_prefix)/native/bin/$($(package)_cc)"
endef
define $(package)_config_cmds
./autogen.sh &&\
patch -p1 < $($(package)_patch_dir)/fix-whitespace.patch &&\
$($(package)_autoconf) $($(package)_config_opts) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf) $($(package)_config_opts)
endef
define $(package)_build_cmds
@@ -25,8 +23,3 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -3,16 +3,11 @@ $(package)_version=1.2
$(package)_download_path=https://download.savannah.nongnu.org/releases/libunwind
$(package)_file_name=lib$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=1de38ffbdc88bd694d10081865871cd2bfbb02ad8ef9e1606aee18d65532b992
$(package)_patches=fix_obj_order.patch
define $(package)_preprocess_cmds
patch -p0 < $($(package)_patch_dir)/fix_obj_order.patch
endef
define $(package)_config_cmds
cp -f $(BASEDIR)/config.guess config/config.guess &&\
cp -f $(BASEDIR)/config.sub config/config.sub &&\
$($(package)_autoconf) --disable-shared --enable-static AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf) --disable-shared --enable-static
endef
define $(package)_build_cmds
@@ -24,6 +19,4 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef

View File

@@ -1,26 +1,24 @@
package=zeromq
$(package)_version=4.1.7
$(package)_version=4.1.5
$(package)_download_path=https://github.com/zeromq/zeromq4-1/releases/download/v$($(package)_version)/
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=31c383cfcd3be1dc8a66e448c403029e793687e70473b89c4cc0bd626e7da299
$(package)_patches=9114d3957725acd34aa8b8d011585812f3369411.patch 9e6745c12e0b100cd38acecc16ce7db02905e27c.patch ffe62d3398d5e0191f554f61049aa7ec9fc892ae.patch
$(package)_sha256_hash=04aac57f081ffa3a2ee5ed04887be9e205df3a7ddade0027460b8042432bdbcf
$(package)_patches=9114d3957725acd34aa8b8d011585812f3369411.patch 9e6745c12e0b100cd38acecc16ce7db02905e27c.patch
define $(package)_set_vars
$(package)_config_opts=--without-documentation --disable-shared --without-libsodium --disable-curve
$(package)_config_opts_linux=--with-pic
$(package)_config_opts_freebsd=--with-pic
$(package)_cxxflags=-std=c++11
endef
define $(package)_preprocess_cmds
patch -p1 < $($(package)_patch_dir)/9114d3957725acd34aa8b8d011585812f3369411.patch && \
patch -p1 < $($(package)_patch_dir)/9e6745c12e0b100cd38acecc16ce7db02905e27c.patch && \
patch -p1 < $($(package)_patch_dir)/ffe62d3398d5e0191f554f61049aa7ec9fc892ae.patch && \
./autogen.sh
endef
define $(package)_config_cmds
$($(package)_autoconf) AR_FLAGS=$($(package)_arflags)
$($(package)_autoconf)
endef
define $(package)_build_cmds
@@ -32,7 +30,5 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
rm -rf bin share &&\
rm lib/*.la
rm -rf bin share
endef

View File

@@ -1,28 +0,0 @@
--- boost_1_64_0/tools/build/src/tools/gcc.jam.O 2017-04-17 03:22:26.000000000 +0100
+++ boost_1_64_0/tools/build/src/tools/gcc.jam 2019-11-15 15:46:16.957937137 +0000
@@ -243,6 +243,8 @@
{
ECHO notice: using gcc archiver :: $(condition) :: $(archiver[1]) ;
}
+ local arflags = [ feature.get-values <arflags> : $(options) ] ;
+ toolset.flags gcc.archive .ARFLAGS $(condition) : $(arflags) ;
# - Ranlib.
local ranlib = [ common.get-invocation-command gcc
@@ -970,6 +972,7 @@
# logic in intel-linux, but that is hardly worth the trouble as on Linux, 'ar'
# is always available.
.AR = ar ;
+.ARFLAGS = rc ;
.RANLIB = ranlib ;
toolset.flags gcc.archive AROPTIONS <archiveflags> ;
@@ -1011,7 +1014,7 @@
#
actions piecemeal archive
{
- "$(.AR)" $(AROPTIONS) rc "$(<)" "$(>)"
+ "$(.AR)" $(AROPTIONS) $(.ARFLAGS) "$(<)" "$(>)"
"$(.RANLIB)" "$(<)"
}

View File

@@ -1,12 +0,0 @@
--- cctools/Makefile.am.O 2016-06-09 15:06:16.000000000 +0100
+++ cctools/Makefile.am 2019-11-18 08:59:20.078663220 +0000
@@ -1,7 +1,7 @@
if ISDARWIN
-SUBDIRS=libstuff ar as misc otool ld64 $(LD_CLASSIC)
+SUBDIRS=libstuff ar as misc ld64 $(LD_CLASSIC)
else
-SUBDIRS=libstuff ar as misc libobjc2 otool ld64 $(LD_CLASSIC)
+SUBDIRS=libstuff ar as misc ld64 $(LD_CLASSIC)
endif
ACLOCAL_AMFLAGS = -I m4

File diff suppressed because it is too large

View File

@@ -1,24 +0,0 @@
--- Makefile.org.O 2019-02-26 14:20:20.000000000 +0000
+++ Makefile.org 2019-11-15 13:05:54.370086856 +0000
@@ -63,8 +63,8 @@
PEX_LIBS=
EX_LIBS=
EXE_EXT=
-ARFLAGS=
-AR=ar $(ARFLAGS) r
+ARFLAGS= r
+AR=ar $(ARFLAGS)
RANLIB= ranlib
RC= windres
NM= nm
--- Configure.O 2019-02-26 14:20:20.000000000 +0000
+++ Configure 2019-11-16 07:43:14.933990774 +0000
@@ -1251,7 +1251,7 @@
my $shared_extension = $fields[$idx_shared_extension];
my $ranlib = $ENV{'RANLIB'} || $fields[$idx_ranlib];
my $ar = $ENV{'AR'} || "ar";
-my $arflags = $fields[$idx_arflags];
+my $arflags = $ENV{'ARFLAGS'} || $fields[$idx_arflags];
my $windres = $ENV{'RC'} || $ENV{'WINDRES'} || "windres";
my $multilib = $fields[$idx_multilib];

View File

@@ -1,11 +0,0 @@
--- config/ltmain.sh.O 2017-01-13 16:00:54.000000000 +0000
+++ config/ltmain.sh 2019-11-17 06:46:51.994402494 +0000
@@ -7957,6 +7957,8 @@
esac
done
fi
+ oldobjs=`for obj in $oldobjs; do echo $obj; done | sort`
+ oldobjs=" `echo $oldobjs`"
eval cmds=\"$old_archive_cmds\"
func_len " $cmds"

View File

@@ -1,38 +0,0 @@
From ffe62d3398d5e0191f554f61049aa7ec9fc892ae Mon Sep 17 00:00:00 2001
From: Gregory Lemercier <greglemercier@free.fr>
Date: Sun, 7 Oct 2018 18:06:54 +0200
Subject: [PATCH] Fix build on arm64 architectures with some strict compilers
This patch fixes an issue that occurs on 64-bit architectures under
strict compiler rules. The code initially checked that the received
size stored in 'uint64_t' was not bigger than the max value of a
'size_t' variable, which is legitimate on 32-bit architectures where
'size_t' variables are stored on 32 bits. On 64-bit architectures,
this test no longer makes sense since 'uint64_t' and 'size_t' types
have the same size. The issue is fixed by ignoring this portion
of code when built for arm64.
---
src/v1_decoder.cpp | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/v1_decoder.cpp b/src/v1_decoder.cpp
index b002dc9d..2c8c97a7 100644
--- a/src/v1_decoder.cpp
+++ b/src/v1_decoder.cpp
@@ -114,11 +114,13 @@ int zmq::v1_decoder_t::eight_byte_size_ready ()
return -1;
}
+#ifndef __aarch64__
// Message size must fit within range of size_t data type.
if (payload_length - 1 > std::numeric_limits <size_t>::max ()) {
errno = EMSGSIZE;
return -1;
}
+#endif
const size_t msg_size = static_cast <size_t> (payload_length - 1);
--
2.20.1

View File
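The commit message above explains why the range check is wrapped in #ifndef __aarch64__: on LP64 targets such as 64-bit ARM, size_t is as wide as the 64-bit wire-length field, so the EMSGSIZE branch can never be taken and strict compilers reject the always-false comparison. A minimal standalone illustration of that reasoning (a sketch, not zeromq source):

// With a 64-bit size_t this static_assert shows the guarded check is vacuous;
// with a 32-bit size_t the left-hand side of || makes it pass anyway.
#include <cstdint>
#include <limits>

constexpr bool range_check_can_fail =
    std::numeric_limits<std::uint64_t>::max() - 1 >
    std::numeric_limits<std::size_t>::max();

static_assert(sizeof(std::size_t) < sizeof(std::uint64_t) || !range_check_can_fail,
              "with a 64-bit size_t no payload length can exceed size_t");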

@@ -1,4 +1,4 @@
# Set the system name to one of Android, Darwin, FreeBSD, Linux, or Windows
# Set the system name, either Darwin, Linux, or Windows
SET(CMAKE_SYSTEM_NAME @depends@)
SET(CMAKE_BUILD_TYPE @release_type@)
@@ -18,19 +18,14 @@ SET(CMAKE_FIND_ROOT_PATH @prefix@ /usr)
SET(ENV{PKG_CONFIG_PATH} @prefix@/lib/pkgconfig)
SET(Readline_ROOT_DIR @prefix@)
SET(Readline_INCLUDE_DIR @prefix@/include)
SET(Readline_LIBRARY @prefix@/lib/libreadline.a)
SET(Terminfo_LIBRARY @prefix@/lib/libtinfo.a)
SET(LRELEASE_PATH @prefix@/native/bin CACHE FILEPATH "path to lrelease" FORCE)
if(NOT CMAKE_SYSTEM_NAME STREQUAL "Android")
SET(Readline_ROOT_DIR @prefix@)
SET(LIBUNWIND_INCLUDE_DIR @prefix@/include)
SET(LIBUNWIND_LIBRARIES @prefix@/lib/libunwind.a)
SET(LIBUNWIND_LIBRARY_DIRS @prefix@/lib)
if(NOT CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
SET(LIBUSB-1.0_LIBRARY @prefix@/lib/libusb-1.0.a)
SET(LIBUDEV_LIBRARY @prefix@/lib/libudev.a)
@@ -39,21 +34,18 @@ SET(Protobuf_PROTOC_EXECUTABLE @prefix@/native/bin/protoc CACHE FILEPATH "Path t
SET(Protobuf_INCLUDE_DIR @prefix@/include CACHE PATH "Protobuf include dir")
SET(Protobuf_INCLUDE_DIRS @prefix@/include CACHE PATH "Protobuf include dir")
SET(Protobuf_LIBRARY @prefix@/lib/libprotobuf.a CACHE FILEPATH "Protobuf library")
endif()
endif()
SET(ZMQ_INCLUDE_PATH @prefix@/include)
SET(ZMQ_LIB @prefix@/lib/libzmq.a)
SET(Boost_IGNORE_SYSTEM_PATH ON)
SET(BOOST_IGNORE_SYSTEM_PATHS_DEFAULT ON)
SET(BOOST_IGNORE_SYSTEM_PATH ON)
SET(BOOST_ROOT @prefix@)
SET(BOOST_INCLUDEDIR @prefix@/include)
SET(BOOST_LIBRARYDIR @prefix@/lib)
SET(Boost_IGNORE_SYSTEM_PATHS_DEFAULT OFF)
SET(Boost_NO_SYSTEM_PATHS ON)
SET(Boost_USE_STATIC_LIBS ON)
SET(Boost_USE_STATIC_RUNTIME ON)
SET(BOOST_IGNORE_SYSTEM_PATHS_DEFAULT OFF)
SET(BOOST_NO_SYSTEM_PATHS TRUE)
SET(BOOST_USE_STATIC_LIBS TRUE)
SET(BOOST_USE_STATIC_RUNTIME TRUE)
SET(OpenSSL_DIR @prefix@/lib)
SET(ARCHITECTURE @arch@)
@@ -83,22 +75,6 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
SET(CMAKE_OSX_ARCHITECTURES "x86_64")
SET(LLVM_ENABLE_PIC OFF)
SET(LLVM_ENABLE_PIE OFF)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Android")
SET(ANDROID TRUE)
if(ARCHITECTURE STREQUAL "arm")
SET(CMAKE_ANDROID_ARCH_ABI "armeabi-v7a")
SET(CMAKE_SYSTEM_PROCESSOR "armv7-a")
SET(CMAKE_ANDROID_ARM_MODE ON)
SET(CMAKE_C_COMPILER_TARGET arm-linux-androideabi)
SET(CMAKE_CXX_COMPILER_TARGET arm-linux-androideabi)
SET(_CMAKE_TOOLCHAIN_PREFIX arm-linux-androideabi-)
elseif(ARCHITECTURE STREQUAL "aarch64")
SET(CMAKE_ANDROID_ARCH_ABI "arm64-v8a")
SET(CMAKE_SYSTEM_PROCESSOR "aarch64")
endif()
SET(CMAKE_ANDROID_STANDALONE_TOOLCHAIN @prefix@/native)
SET(CMAKE_C_COMPILER "${_CMAKE_TOOLCHAIN_PREFIX}clang")
SET(CMAKE_CXX_COMPILER "${_CMAKE_TOOLCHAIN_PREFIX}clang++")
else()
SET(CMAKE_C_COMPILER @CC@)
SET(CMAKE_CXX_COMPILER @CXX@)
@@ -110,37 +86,22 @@ if(ARCHITECTURE STREQUAL "arm")
set(ARM_ID "armv7-a")
set(BUILD_64 OFF)
set(CMAKE_BUILD_TYPE release)
if(ANDROID)
set(BUILD_TAG "android-armv7")
else()
set(BUILD_TAG "linux-armv7")
endif()
set(BUILD_TAG "linux-armv7")
set(ARM7)
elseif(ARCHITECTURE STREQUAL "aarch64")
set(ARCH "armv8-a")
set(ARM ON)
set(ARM_ID "armv8-a")
if(ANDROID)
set(BUILD_TAG "android-armv8")
else()
set(BUILD_TAG "linux-armv8")
endif()
set(BUILD_TAG "linux-armv8")
set(BUILD_64 ON)
endif()
if(ARCHITECTURE STREQUAL "riscv64")
set(NO_AES ON)
set(ARCH "rv64imafdc")
endif()
if(ARCHITECTURE STREQUAL "i686")
if(ARCHITECTURE STREQUAL "i686" AND CMAKE_SYSTEM_NAME STREQUAL "Linux")
SET(LINUX_32 ON)
SET(ARCH_ID "i386")
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
SET(LINUX_32 ON)
endif()
endif()
if(ARCHITECTURE STREQUAL "x86_64")
if(ARCHITECTURE STREQUAL "x86_64" AND CMAKE_SYSTEM_NAME STREQUAL "Linux")
SET(ARCH_ID "x86_64")
endif()

View File

@@ -1,145 +0,0 @@
// Copyright (c) 2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#pragma once
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <vector>
#include "span.h"
namespace epee
{
struct byte_slice_data;
struct release_byte_slice
{
void operator()(byte_slice_data*) const noexcept;
};
/*! Inspired by slices in golang. Storage is thread-safe reference counted,
allowing for cheap copies or range selection on the bytes. The bytes
owned by this class are always immutable.
The functions `operator=`, `take_slice` and `remove_prefix` may alter the
reference count for the backing store, which will invalidate pointers
previously returned if the reference count is zero. Be careful about
"caching" pointers in these circumstances. */
class byte_slice
{
/* A custom reference count is used instead of shared_ptr because it allows
for an allocation optimization for the span constructor. This also
reduces the size of this class by one pointer. */
std::unique_ptr<byte_slice_data, release_byte_slice> storage_;
span<const std::uint8_t> portion_; // within storage_
//! Internal use only; use to increase `storage` reference count.
byte_slice(byte_slice_data* storage, span<const std::uint8_t> portion) noexcept;
struct adapt_buffer{};
template<typename T>
explicit byte_slice(const adapt_buffer, T&& buffer);
public:
using value_type = std::uint8_t;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using pointer = const std::uint8_t*;
using const_pointer = const std::uint8_t*;
using reference = std::uint8_t;
using const_reference = std::uint8_t;
using iterator = pointer;
using const_iterator = const_pointer;
//! Construct empty slice.
byte_slice() noexcept
: storage_(nullptr), portion_()
{}
//! Construct empty slice
byte_slice(std::nullptr_t) noexcept
: byte_slice()
{}
//! Scatter-gather (copy) multiple `sources` into a single allocated slice.
explicit byte_slice(std::initializer_list<span<const std::uint8_t>> sources);
//! Convert `buffer` into a slice using one allocation for shared count.
explicit byte_slice(std::vector<std::uint8_t>&& buffer);
//! Convert `buffer` into a slice using one allocation for shared count.
explicit byte_slice(std::string&& buffer);
byte_slice(byte_slice&& source) noexcept;
~byte_slice() noexcept = default;
//! \note May invalidate previously retrieved pointers.
byte_slice& operator=(byte_slice&&) noexcept;
//! \return A shallow (cheap) copy of the data from `this` slice.
byte_slice clone() const noexcept { return {storage_.get(), portion_}; }
iterator begin() const noexcept { return portion_.begin(); }
const_iterator cbegin() const noexcept { return portion_.begin(); }
iterator end() const noexcept { return portion_.end(); }
const_iterator cend() const noexcept { return portion_.end(); }
bool empty() const noexcept { return storage_ == nullptr; }
const std::uint8_t* data() const noexcept { return portion_.data(); }
std::size_t size() const noexcept { return portion_.size(); }
/*! Drop bytes from the beginning of `this` slice.
\note May invalidate previously retrieved pointers.
\post `this->size() = this->size() - std::min(this->size(), max_bytes)`
\post `if (this->size() <= max_bytes) this->data() = nullptr`
\return Number of bytes removed. */
std::size_t remove_prefix(std::size_t max_bytes) noexcept;
/*! "Take" bytes from the beginning of `this` slice.
\note May invalidate previously retrieved pointers.
\post `this->size() = this->size() - std::min(this->size(), max_bytes)`
\post `if (this->size() <= max_bytes) this->data() = nullptr`
\return Slice containing the bytes removed from `this` slice. */
byte_slice take_slice(std::size_t max_bytes) noexcept;
/*! Return a shallow (cheap) copy of a slice from `begin` and `end` offsets.
\throw std::out_of_range If `end < begin`.
\throw std::out_of_range If `size() < end`.
\return Slice starting at `data() + begin` of size `end - begin`. */
byte_slice get_slice(std::size_t begin, std::size_t end) const;
};
} // epee

View File
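The comments in the header above describe byte_slice as an immutable, reference-counted byte buffer with cheap shallow copies and front-trimming operations. A hedged usage sketch of that interface (the include path below is an assumption; this is not code from the repository):

#include <cassert>
#include <string>
#include "byte_slice.h" // assumed include path for the header shown above

void byte_slice_example()
{
    epee::byte_slice full{std::string{"header|payload"}}; // one allocation, takes ownership
    epee::byte_slice copy = full.clone();                 // shallow copy, bumps the shared count

    epee::byte_slice header = full.take_slice(7);         // "header|" is removed from the front of full
    assert(header.size() == 7 && full.size() == 7);
    assert(copy.size() == 14);                            // the clone still spans all 14 original bytes
}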

@@ -45,9 +45,6 @@
#include "readline_buffer.h"
#endif
#undef MONERO_DEFAULT_LOG_CATEGORY
#define MONERO_DEFAULT_LOG_CATEGORY "console_handler"
namespace epee
{
class async_stdin_reader
@@ -99,7 +96,7 @@ namespace epee
res = true;
}
if (!eos() && m_read_status != state_cancelled)
if (!eos())
m_read_status = state_init;
return res;
@@ -125,14 +122,6 @@ namespace epee
}
}
void cancel()
{
boost::unique_lock<boost::mutex> lock(m_response_mutex);
m_read_status = state_cancelled;
m_has_read_request = false;
m_response_cv.notify_one();
}
private:
bool start_read()
{
@@ -173,9 +162,6 @@ namespace epee
while (m_run.load(std::memory_order_relaxed))
{
if (m_read_status == state_cancelled)
return false;
fd_set read_set;
FD_ZERO(&read_set);
FD_SET(stdin_fileno, &read_set);
@@ -193,9 +179,6 @@ namespace epee
#else
while (m_run.load(std::memory_order_relaxed))
{
if (m_read_status == state_cancelled)
return false;
int retval = ::WaitForSingleObject(::GetStdHandle(STD_INPUT_HANDLE), 100);
switch (retval)
{
@@ -236,8 +219,7 @@ reread:
case rdln::full: break;
}
#else
if (m_read_status != state_cancelled)
std::getline(std::cin, line);
std::getline(std::cin, line);
#endif
read_ok = !std::cin.eof() && !std::cin.fail();
}
@@ -321,7 +303,7 @@ eof:
template<class chain_handler>
bool run(chain_handler ch_handler, std::function<std::string(void)> prompt, const std::string& usage = "", std::function<void(void)> exit_handler = NULL)
{
return run(prompt, usage, [&](const boost::optional<std::string>& cmd) { return ch_handler(cmd); }, exit_handler);
return run(prompt, usage, [&](const std::string& cmd) { return ch_handler(cmd); }, exit_handler);
}
void stop()
@@ -330,12 +312,6 @@ eof:
m_stdin_reader.stop();
}
void cancel()
{
m_cancel = true;
m_stdin_reader.cancel();
}
void print_prompt()
{
std::string prompt = m_prompt();
@@ -384,23 +360,18 @@ eof:
std::cout << std::endl;
break;
}
if (m_cancel)
{
MDEBUG("Input cancelled");
cmd_handler(boost::none);
m_cancel = false;
continue;
}
if (!get_line_ret)
{
MERROR("Failed to read line.");
}
string_tools::trim(command);
LOG_PRINT_L2("Read command: " << command);
if(cmd_handler(command))
if (command.empty())
{
continue;
}
else if(cmd_handler(command))
{
continue;
}
@@ -430,7 +401,6 @@ eof:
private:
async_stdin_reader m_stdin_reader;
std::atomic<bool> m_running = {true};
std::atomic<bool> m_cancel = {false};
std::function<std::string(void)> m_prompt;
};
@@ -512,16 +482,8 @@ eof:
class command_handler {
public:
typedef boost::function<bool (const std::vector<std::string> &)> callback;
typedef boost::function<bool (void)> empty_callback;
typedef std::map<std::string, std::pair<callback, std::pair<std::string, std::string>>> lookup;
command_handler():
m_unknown_command_handler([](const std::vector<std::string>&){return false;}),
m_empty_command_handler([](){return true;}),
m_cancel_handler([](){return true;})
{
}
std::string get_usage()
{
std::stringstream ss;
@@ -554,45 +516,25 @@ eof:
#endif
}
void set_unknown_command_handler(const callback& hndlr)
{
m_unknown_command_handler = hndlr;
}
void set_empty_command_handler(const empty_callback& hndlr)
{
m_empty_command_handler = hndlr;
}
void set_cancel_handler(const empty_callback& hndlr)
{
m_cancel_handler = hndlr;
}
bool process_command_vec(const std::vector<std::string>& cmd)
{
if(!cmd.size() || (cmd.size() == 1 && !cmd[0].size()))
return m_empty_command_handler();
if(!cmd.size())
return false;
auto it = m_command_handlers.find(cmd.front());
if(it == m_command_handlers.end())
return m_unknown_command_handler(cmd);
return false;
std::vector<std::string> cmd_local(cmd.begin()+1, cmd.end());
return it->second.first(cmd_local);
}
bool process_command_str(const boost::optional<std::string>& cmd)
bool process_command_str(const std::string& cmd)
{
if (!cmd)
return m_cancel_handler();
std::vector<std::string> cmd_v;
boost::split(cmd_v,*cmd,boost::is_any_of(" "), boost::token_compress_on);
boost::split(cmd_v,cmd,boost::is_any_of(" "), boost::token_compress_on);
return process_command_vec(cmd_v);
}
private:
lookup m_command_handlers;
callback m_unknown_command_handler;
empty_callback m_empty_command_handler;
empty_callback m_cancel_handler;
};
/************************************************************************/
@@ -630,11 +572,6 @@ eof:
{
m_console_handler.print_prompt();
}
void cancel_input()
{
m_console_handler.cancel();
}
};
///* work around because of broken boost bind */

View File
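One side of the diff above routes console input through boost::optional<std::string>: a disengaged optional means input was cancelled (handled by the callback registered with set_cancel_handler), an empty string means a bare enter (the empty-command handler), and anything else is tokenized and dispatched. A small hedged sketch of a handler following that protocol (illustrative, not repository code):

#include <iostream>
#include <string>
#include <boost/optional.hpp>

bool handle_command(const boost::optional<std::string>& cmd)
{
    if (!cmd)               // corresponds to the m_cancel_handler path
    {
        std::cout << "input cancelled" << std::endl;
        return true;
    }
    if (cmd->empty())       // corresponds to the m_empty_command_handler path
        return true;
    std::cout << "dispatching: " << *cmd << std::endl;
    return true;
}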

@@ -129,32 +129,9 @@ static inline uint32_t div128_32(uint64_t dividend_hi, uint64_t dividend_lo, uin
return remainder;
}
// Long divisor with 2^64 base
void div128_64(uint64_t dividend_hi, uint64_t dividend_lo, uint64_t divisor, uint64_t* quotient_hi, uint64_t *quotient_lo, uint64_t *remainder_hi, uint64_t *remainder_lo);
static inline void add64clamp(uint64_t *value, uint64_t add)
{
static const uint64_t maxval = (uint64_t)-1;
if (*value > maxval - add)
*value = maxval;
else
*value += add;
}
static inline void sub64clamp(uint64_t *value, uint64_t sub)
{
if (*value < sub)
*value = 0;
else
*value -= sub;
}
#define IDENT16(x) ((uint16_t) (x))
#define IDENT32(x) ((uint32_t) (x))
#define IDENT64(x) ((uint64_t) (x))
#define SWAP16(x) ((((uint16_t) (x) & 0x00ff) << 8) | \
(((uint16_t) (x) & 0xff00) >> 8))
#define SWAP32(x) ((((uint32_t) (x) & 0x000000ff) << 24) | \
(((uint32_t) (x) & 0x0000ff00) << 8) | \
(((uint32_t) (x) & 0x00ff0000) >> 8) | \
@@ -168,18 +145,10 @@ static inline void sub64clamp(uint64_t *value, uint64_t sub)
(((uint64_t) (x) & 0x00ff000000000000) >> 40) | \
(((uint64_t) (x) & 0xff00000000000000) >> 56))
static inline uint16_t ident16(uint16_t x) { return x; }
static inline uint32_t ident32(uint32_t x) { return x; }
static inline uint64_t ident64(uint64_t x) { return x; }
#ifndef __OpenBSD__
# if defined(__ANDROID__) && defined(__swap16) && !defined(swap16)
# define swap16 __swap16
# elif !defined(swap16)
static inline uint16_t swap16(uint16_t x) {
return ((x & 0x00ff) << 8) | ((x & 0xff00) >> 8);
}
# endif
# if defined(__ANDROID__) && defined(__swap32) && !defined(swap32)
# define swap32 __swap32
# elif !defined(swap32)
@@ -207,12 +176,6 @@ static inline uint64_t swap64(uint64_t x) {
static inline void mem_inplace_ident(void *mem UNUSED, size_t n UNUSED) { }
#undef UNUSED
static inline void mem_inplace_swap16(void *mem, size_t n) {
size_t i;
for (i = 0; i < n; i++) {
((uint16_t *) mem)[i] = swap16(((const uint16_t *) mem)[i]);
}
}
static inline void mem_inplace_swap32(void *mem, size_t n) {
size_t i;
for (i = 0; i < n; i++) {
@@ -226,9 +189,6 @@ static inline void mem_inplace_swap64(void *mem, size_t n) {
}
}
static inline void memcpy_ident16(void *dst, const void *src, size_t n) {
memcpy(dst, src, 2 * n);
}
static inline void memcpy_ident32(void *dst, const void *src, size_t n) {
memcpy(dst, src, 4 * n);
}
@@ -236,12 +196,6 @@ static inline void memcpy_ident64(void *dst, const void *src, size_t n) {
memcpy(dst, src, 8 * n);
}
static inline void memcpy_swap16(void *dst, const void *src, size_t n) {
size_t i;
for (i = 0; i < n; i++) {
((uint16_t *) dst)[i] = swap16(((const uint16_t *) src)[i]);
}
}
static inline void memcpy_swap32(void *dst, const void *src, size_t n) {
size_t i;
for (i = 0; i < n; i++) {
@@ -266,14 +220,6 @@ static_assert(false, "BYTE_ORDER is undefined. Perhaps, GNU extensions are not e
#endif
#if BYTE_ORDER == LITTLE_ENDIAN
#define SWAP16LE IDENT16
#define SWAP16BE SWAP16
#define swap16le ident16
#define swap16be swap16
#define mem_inplace_swap16le mem_inplace_ident
#define mem_inplace_swap16be mem_inplace_swap16
#define memcpy_swap16le memcpy_ident16
#define memcpy_swap16be memcpy_swap16
#define SWAP32LE IDENT32
#define SWAP32BE SWAP32
#define swap32le ident32
@@ -293,14 +239,6 @@ static_assert(false, "BYTE_ORDER is undefined. Perhaps, GNU extensions are not e
#endif
#if BYTE_ORDER == BIG_ENDIAN
#define SWAP16BE IDENT16
#define SWAP16LE SWAP16
#define swap16be ident16
#define swap16le swap16
#define mem_inplace_swap16be mem_inplace_ident
#define mem_inplace_swap16le mem_inplace_swap16
#define memcpy_swap16be memcpy_ident16
#define memcpy_swap16le memcpy_swap16
#define SWAP32BE IDENT32
#define SWAP32LE SWAP32
#define swap32be ident32

View File
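One side of the hunk above adds saturating 64-bit helpers: add64clamp() pins the result at UINT64_MAX instead of wrapping on overflow, and sub64clamp() pins it at zero on underflow. A self-contained restatement of that behaviour, kept separate from the header itself (sketch only):

#include <cassert>
#include <cstdint>

static inline void add64clamp(uint64_t *value, uint64_t add)
{
    static const uint64_t maxval = (uint64_t)-1;
    if (*value > maxval - add)
        *value = maxval;    // would overflow: clamp to UINT64_MAX
    else
        *value += add;
}

static inline void sub64clamp(uint64_t *value, uint64_t sub)
{
    if (*value < sub)
        *value = 0;         // would underflow: clamp to zero
    else
        *value -= sub;
}

int main()
{
    uint64_t v = UINT64_MAX - 5;
    add64clamp(&v, 10);
    assert(v == UINT64_MAX); // saturated instead of wrapping around
    sub64clamp(&v, UINT64_MAX);
    sub64clamp(&v, 1);
    assert(v == 0);          // saturated at zero
    return 0;
}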

@@ -32,7 +32,6 @@
#include <list>
#include <numeric>
#include <random>
#include <boost/timer/timer.hpp>
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/random_generator.hpp>
@@ -231,7 +230,7 @@ namespace math_helper
}
}
template<typename get_interval, bool start_immediate = true>
template<uint64_t scale, int default_interval, bool start_immediate = true>
class once_a_time
{
uint64_t get_time() const
@@ -252,25 +251,14 @@ namespace math_helper
#endif
}
void set_next_interval()
{
m_interval = get_interval()();
}
public:
once_a_time()
once_a_time():m_interval(default_interval * scale)
{
m_last_worked_time = 0;
if(!start_immediate)
m_last_worked_time = get_time();
set_next_interval();
}
void trigger()
{
m_last_worked_time = 0;
}
template<class functor_t>
bool do_call(functor_t functr)
{
@@ -280,7 +268,6 @@ namespace math_helper
{
bool res = functr();
m_last_worked_time = get_time();
set_next_interval();
return res;
}
return true;
@@ -291,13 +278,9 @@ namespace math_helper
uint64_t m_interval;
};
template<uint64_t N> struct get_constant_interval { public: uint64_t operator()() const { return N; } };
template<int default_interval, bool start_immediate = true>
class once_a_time_seconds: public once_a_time<get_constant_interval<default_interval * (uint64_t)1000000>, start_immediate> {};
class once_a_time_seconds: public once_a_time<1000000, default_interval, start_immediate> {};
template<int default_interval, bool start_immediate = true>
class once_a_time_milliseconds: public once_a_time<get_constant_interval<default_interval * (uint64_t)1000>, start_immediate> {};
template<typename get_interval, bool start_immediate = true>
class once_a_time_seconds_range: public once_a_time<get_interval, start_immediate> {};
class once_a_time_milliseconds: public once_a_time<1000, default_interval, start_immediate> {};
}
}
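A minimal usage sketch for the interval helpers above; the enclosing epee namespace is assumed and the idle-loop wrapper is illustrative, not part of this diff:
epee::math_helper::once_a_time_seconds<30> sync_timer; // fires at most once every 30 s
bool on_idle()
{
  return sync_timer.do_call([]
  {
    // periodic work goes here; do_call() skips it (returning true) until 30 s elapsed
    return true;
  });
}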

View File

@@ -28,8 +28,6 @@
#ifndef _MISC_LOG_EX_H_
#define _MISC_LOG_EX_H_
#ifdef __cplusplus
#include <string>
#include "easylogging++.h"
@@ -40,29 +38,29 @@
#define MAX_LOG_FILE_SIZE 104850000 // 100 MB - 7600 bytes
#define MAX_LOG_FILES 50
#define MCLOG_TYPE(level, cat, color, type, x) do { \
#define MCLOG_TYPE(level, cat, type, x) do { \
if (ELPP->vRegistry()->allowed(level, cat)) { \
el::base::Writer(level, color, __FILE__, __LINE__, ELPP_FUNC, type).construct(cat) << x; \
el::base::Writer(level, __FILE__, __LINE__, ELPP_FUNC, type).construct(cat) << x; \
} \
} while (0)
#define MCLOG(level, cat, color, x) MCLOG_TYPE(level, cat, color, el::base::DispatchAction::NormalLog, x)
#define MCLOG_FILE(level, cat, x) MCLOG_TYPE(level, cat, el::Color::Default, el::base::DispatchAction::FileOnlyLog, x)
#define MCLOG(level, cat, x) MCLOG_TYPE(level, cat, el::base::DispatchAction::NormalLog, x)
#define MCLOG_FILE(level, cat, x) MCLOG_TYPE(level, cat, el::base::DispatchAction::FileOnlyLog, x)
#define MCFATAL(cat,x) MCLOG(el::Level::Fatal,cat, el::Color::Default, x)
#define MCERROR(cat,x) MCLOG(el::Level::Error,cat, el::Color::Default, x)
#define MCWARNING(cat,x) MCLOG(el::Level::Warning,cat, el::Color::Default, x)
#define MCINFO(cat,x) MCLOG(el::Level::Info,cat, el::Color::Default, x)
#define MCDEBUG(cat,x) MCLOG(el::Level::Debug,cat, el::Color::Default, x)
#define MCTRACE(cat,x) MCLOG(el::Level::Trace,cat, el::Color::Default, x)
#define MCFATAL(cat,x) MCLOG(el::Level::Fatal,cat, x)
#define MCERROR(cat,x) MCLOG(el::Level::Error,cat, x)
#define MCWARNING(cat,x) MCLOG(el::Level::Warning,cat, x)
#define MCINFO(cat,x) MCLOG(el::Level::Info,cat, x)
#define MCDEBUG(cat,x) MCLOG(el::Level::Debug,cat, x)
#define MCTRACE(cat,x) MCLOG(el::Level::Trace,cat, x)
#define MCLOG_COLOR(level,cat,color,x) MCLOG(level,cat,color,x)
#define MCLOG_RED(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Red,x)
#define MCLOG_GREEN(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Green,x)
#define MCLOG_YELLOW(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Yellow,x)
#define MCLOG_BLUE(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Blue,x)
#define MCLOG_MAGENTA(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Magenta,x)
#define MCLOG_CYAN(level,cat,x) MCLOG_COLOR(level,cat,el::Color::Cyan,x)
#define MCLOG_COLOR(level,cat,color,x) MCLOG(level,cat,"\033[1;" color "m" << x << "\033[0m")
#define MCLOG_RED(level,cat,x) MCLOG_COLOR(level,cat,"31",x)
#define MCLOG_GREEN(level,cat,x) MCLOG_COLOR(level,cat,"32",x)
#define MCLOG_YELLOW(level,cat,x) MCLOG_COLOR(level,cat,"33",x)
#define MCLOG_BLUE(level,cat,x) MCLOG_COLOR(level,cat,"34",x)
#define MCLOG_MAGENTA(level,cat,x) MCLOG_COLOR(level,cat,"35",x)
#define MCLOG_CYAN(level,cat,x) MCLOG_COLOR(level,cat,"36",x)
#define MLOG_RED(level,x) MCLOG_RED(level,MONERO_DEFAULT_LOG_CATEGORY,x)
#define MLOG_GREEN(level,x) MCLOG_GREEN(level,MONERO_DEFAULT_LOG_CATEGORY,x)
@@ -77,7 +75,7 @@
#define MINFO(x) MCINFO(MONERO_DEFAULT_LOG_CATEGORY,x)
#define MDEBUG(x) MCDEBUG(MONERO_DEFAULT_LOG_CATEGORY,x)
#define MTRACE(x) MCTRACE(MONERO_DEFAULT_LOG_CATEGORY,x)
#define MLOG(level,x) MCLOG(level,MONERO_DEFAULT_LOG_CATEGORY,el::Color::Default,x)
#define MLOG(level,x) MCLOG(level,MONERO_DEFAULT_LOG_CATEGORY,x)
#define MGINFO(x) MCINFO("global",x)
#define MGINFO_RED(x) MCLOG_RED(el::Level::Info, "global",x)
@@ -87,14 +85,14 @@
#define MGINFO_MAGENTA(x) MCLOG_MAGENTA(el::Level::Info, "global",x)
#define MGINFO_CYAN(x) MCLOG_CYAN(el::Level::Info, "global",x)
#define IFLOG(level, cat, color, type, init, x) \
#define IFLOG(level, cat, type, init, x) \
do { \
if (ELPP->vRegistry()->allowed(level, cat)) { \
init; \
el::base::Writer(level, color, __FILE__, __LINE__, ELPP_FUNC, type).construct(cat) << x; \
el::base::Writer(level, __FILE__, __LINE__, ELPP_FUNC, type).construct(cat) << x; \
} \
} while(0)
#define MIDEBUG(init, x) IFLOG(el::Level::Debug, MONERO_DEFAULT_LOG_CATEGORY, el::Color::Default, el::base::DispatchAction::NormalLog, init, x)
#define MIDEBUG(init, x) IFLOG(el::Level::Debug, MONERO_DEFAULT_LOG_CATEGORY, el::base::DispatchAction::NormalLog, init, x)
#define LOG_ERROR(x) MERROR(x)
@@ -222,28 +220,4 @@ void set_console_color(int color, bool bright);
void reset_console_color();
}
extern "C"
{
#endif
#ifdef __GNUC__
#define ATTRIBUTE_PRINTF __attribute__((format(printf, 2, 3)))
#else
#define ATTRIBUTE_PRINTF
#endif
bool merror(const char *category, const char *format, ...) ATTRIBUTE_PRINTF;
bool mwarning(const char *category, const char *format, ...) ATTRIBUTE_PRINTF;
bool minfo(const char *category, const char *format, ...) ATTRIBUTE_PRINTF;
bool mdebug(const char *category, const char *format, ...) ATTRIBUTE_PRINTF;
bool mtrace(const char *category, const char *format, ...) ATTRIBUTE_PRINTF;
#ifdef __cplusplus
}
#endif
#endif //_MISC_LOG_EX_H_
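The two colour schemes compared above differ only in where the colour is applied. With the string-pasting variant the expansion is roughly the following (a sketch of the macro expansion, not new code):
// MCLOG_RED(level, cat, x)
//   -> MCLOG_COLOR(level, cat, "31", x)
//   -> MCLOG(level, cat, "\033[1;" "31" "m" << x << "\033[0m")
// The message is wrapped in ANSI "bold colour on / reset" escapes, so el::base::Writer
// no longer needs an el::Color argument; the other variant instead threads
// el::Color::Red (etc.) through MCLOG_TYPE to the writer.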

View File

@@ -49,12 +49,10 @@
#include <boost/asio/ssl.hpp>
#include <boost/array.hpp>
#include <boost/noncopyable.hpp>
#include <boost/shared_ptr.hpp> //! \TODO Convert to std::shared_ptr
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/interprocess/detail/atomic.hpp>
#include <boost/thread/thread.hpp>
#include <memory>
#include "byte_slice.h"
#include "net_utils_base.h"
#include "syncobj.h"
#include "connection_basic.hpp"
@@ -72,7 +70,7 @@ namespace net_utils
struct i_connection_filter
{
virtual bool is_remote_host_allowed(const epee::net_utils::network_address &address, time_t *t = NULL)=0;
virtual bool is_remote_host_allowed(const epee::net_utils::network_address &address)=0;
protected:
virtual ~i_connection_filter(){}
};
@@ -92,24 +90,25 @@ namespace net_utils
public:
typedef typename t_protocol_handler::connection_context t_connection_context;
struct shared_state : connection_basic_shared_state, t_protocol_handler::config_type
struct shared_state : connection_basic_shared_state
{
shared_state()
: connection_basic_shared_state(), t_protocol_handler::config_type(), pfilter(nullptr), stop_signal_sent(false)
: connection_basic_shared_state(), pfilter(nullptr), config(), stop_signal_sent(false)
{}
i_connection_filter* pfilter;
typename t_protocol_handler::config_type config;
bool stop_signal_sent;
};
/// Construct a connection with the given io_service.
explicit connection( boost::asio::io_service& io_service,
std::shared_ptr<shared_state> state,
boost::shared_ptr<shared_state> state,
t_connection_type connection_type,
epee::net_utils::ssl_support_t ssl_support);
explicit connection( boost::asio::ip::tcp::socket&& sock,
std::shared_ptr<shared_state> state,
boost::shared_ptr<shared_state> state,
t_connection_type connection_type,
epee::net_utils::ssl_support_t ssl_support);
@@ -136,7 +135,8 @@ namespace net_utils
private:
//----------------- i_service_endpoint ---------------------
virtual bool do_send(byte_slice message); ///< (see do_send from i_service_endpoint)
virtual bool do_send(const void* ptr, size_t cb); ///< (see do_send from i_service_endpoint)
virtual bool do_send_chunk(const void* ptr, size_t cb); ///< will send (or queue) a part of data
virtual bool send_done();
virtual bool close();
virtual bool call_run_once_service_io();
@@ -145,8 +145,6 @@ namespace net_utils
virtual bool add_ref();
virtual bool release();
//------------------------------------------------------
bool do_send_chunk(byte_slice chunk); ///< will send (or queue) a part of data. internal use only
boost::shared_ptr<connection<t_protocol_handler> > safe_shared_from_this();
bool shutdown();
/// Handle completion of a receive operation.
@@ -229,12 +227,8 @@ namespace net_utils
std::map<std::string, t_connection_type> server_type_map;
void create_server_type_map();
bool init_server(uint32_t port, const std::string& address = "0.0.0.0",
uint32_t port_ipv6 = 0, const std::string& address_ipv6 = "::", bool use_ipv6 = false, bool require_ipv4 = true,
ssl_options_t ssl_options = ssl_support_t::e_ssl_support_autodetect);
bool init_server(const std::string port, const std::string& address = "0.0.0.0",
const std::string port_ipv6 = "", const std::string address_ipv6 = "::", bool use_ipv6 = false, bool require_ipv4 = true,
ssl_options_t ssl_options = ssl_support_t::e_ssl_support_autodetect);
bool init_server(uint32_t port, const std::string address = "0.0.0.0", ssl_options_t ssl_options = ssl_support_t::e_ssl_support_autodetect);
bool init_server(const std::string port, const std::string& address = "0.0.0.0", ssl_options_t ssl_options = ssl_support_t::e_ssl_support_autodetect);
/// Run the server's io_service loop.
bool run_server(size_t threads_count, bool wait = true, const boost::thread::attributes& attrs = boost::thread::attributes());
@@ -271,17 +265,10 @@ namespace net_utils
typename t_protocol_handler::config_type& get_config_object()
{
assert(m_state != nullptr); // always set in constructor
return *m_state;
}
std::shared_ptr<typename t_protocol_handler::config_type> get_config_shared()
{
assert(m_state != nullptr); // always set in constructor
return {m_state};
return m_state->config;
}
int get_binded_port(){return m_port;}
int get_binded_port_ipv6(){return m_port_ipv6;}
long get_connections_count() const
{
@@ -352,13 +339,11 @@ namespace net_utils
/// Run the server's io_service loop.
bool worker_thread();
/// Handle completion of an asynchronous accept operation.
void handle_accept_ipv4(const boost::system::error_code& e);
void handle_accept_ipv6(const boost::system::error_code& e);
void handle_accept(const boost::system::error_code& e, bool ipv6 = false);
void handle_accept(const boost::system::error_code& e);
bool is_thread_worker();
const std::shared_ptr<typename connection<t_protocol_handler>::shared_state> m_state;
const boost::shared_ptr<typename connection<t_protocol_handler>::shared_state> m_state;
/// The io_service used to perform asynchronous operations.
struct worker
@@ -375,16 +360,11 @@ namespace net_utils
/// Acceptor used to listen for incoming connections.
boost::asio::ip::tcp::acceptor acceptor_;
boost::asio::ip::tcp::acceptor acceptor_ipv6;
epee::net_utils::network_address default_remote;
std::atomic<bool> m_stop_signal_sent;
uint32_t m_port;
uint32_t m_port_ipv6;
std::string m_address;
std::string m_address_ipv6;
bool m_use_ipv6;
bool m_require_ipv4;
std::string m_thread_name_prefix; //TODO: change to enum server_type, now used
size_t m_threads_count;
std::vector<boost::shared_ptr<boost::thread> > m_threads;
@@ -396,8 +376,6 @@ namespace net_utils
/// The next connection to be accepted
connection_ptr new_connection_;
connection_ptr new_connection_ipv6;
boost::mutex connections_mutex;
std::set<connection_ptr> connections_;
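The two shared_state layouts above differ mainly in how the per-protocol configuration is reached; summarised as a sketch (names taken from the hunk, neither line is code from the diff):
// variant 1: shared_state derives from t_protocol_handler::config_type as well, so
//            get_config_object() simply returns *m_state (upcast to config_type&) and
//            get_config_shared() can hand the whole state out as a shared_ptr.
// variant 2: shared_state keeps a separate config member, and get_config_object()
//            returns m_state->config instead.
// Both variants carry pfilter and stop_signal_sent next to the configuration.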

View File

@@ -50,8 +50,6 @@
#include <sstream>
#include <iomanip>
#include <algorithm>
#include <functional>
#include <random>
#undef MONERO_DEFAULT_LOG_CATEGORY
#define MONERO_DEFAULT_LOG_CATEGORY "net"
@@ -70,7 +68,7 @@ namespace epee
namespace net_utils
{
template<typename T>
T& check_and_get(std::shared_ptr<T>& ptr)
T& check_and_get(boost::shared_ptr<T>& ptr)
{
CHECK_AND_ASSERT_THROW_MES(bool(ptr), "shared_state cannot be null");
return *ptr;
@@ -83,7 +81,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
template<class t_protocol_handler>
connection<t_protocol_handler>::connection( boost::asio::io_service& io_service,
std::shared_ptr<shared_state> state,
boost::shared_ptr<shared_state> state,
t_connection_type connection_type,
ssl_support_t ssl_support
)
@@ -93,13 +91,13 @@ PRAGMA_WARNING_DISABLE_VS(4355)
template<class t_protocol_handler>
connection<t_protocol_handler>::connection( boost::asio::ip::tcp::socket&& sock,
std::shared_ptr<shared_state> state,
boost::shared_ptr<shared_state> state,
t_connection_type connection_type,
ssl_support_t ssl_support
)
:
connection_basic(std::move(sock), state, ssl_support),
m_protocol_handler(this, check_and_get(state), context),
m_protocol_handler(this, check_and_get(state).config, context),
buffer_ssl_init_fill(0),
m_connection_type( connection_type ),
m_throttle_speed_in("speed_in", "throttle_speed_in"),
@@ -147,18 +145,10 @@ PRAGMA_WARNING_DISABLE_VS(4355)
boost::system::error_code ec;
auto remote_ep = socket().remote_endpoint(ec);
CHECK_AND_NO_ASSERT_MES(!ec, false, "Failed to get remote endpoint: " << ec.message() << ':' << ec.value());
CHECK_AND_NO_ASSERT_MES(remote_ep.address().is_v4() || remote_ep.address().is_v6(), false, "only IPv4 and IPv6 supported here");
CHECK_AND_NO_ASSERT_MES(remote_ep.address().is_v4(), false, "IPv6 not supported here");
if (remote_ep.address().is_v4())
{
const unsigned long ip_ = boost::asio::detail::socket_ops::host_to_network_long(remote_ep.address().to_v4().to_ulong());
return start(is_income, is_multithreaded, ipv4_network_address{uint32_t(ip_), remote_ep.port()});
}
else
{
const auto ip_ = remote_ep.address().to_v6();
return start(is_income, is_multithreaded, ipv6_network_address{ip_, remote_ep.port()});
}
const unsigned long ip_{boost::asio::detail::socket_ops::host_to_network_long(remote_ep.address().to_v4().to_ulong())};
return start(is_income, is_multithreaded, ipv4_network_address{uint32_t(ip_), remote_ep.port()});
CATCH_ENTRY_L0("connection<t_protocol_handler>::start()", false);
}
//---------------------------------------------------------------------------------
@@ -337,14 +327,12 @@ PRAGMA_WARNING_DISABLE_VS(4355)
if (!e)
{
double current_speed_down;
{
CRITICAL_REGION_LOCAL(m_throttle_speed_in_mutex);
m_throttle_speed_in.handle_trafic_exact(bytes_transferred);
current_speed_down = m_throttle_speed_in.get_current_speed();
context.m_current_speed_down = m_throttle_speed_in.get_current_speed();
context.m_max_speed_down = std::max(context.m_max_speed_down, context.m_current_speed_down);
}
context.m_current_speed_down = current_speed_down;
context.m_max_speed_down = std::max(context.m_max_speed_down, current_speed_down);
{
CRITICAL_REGION_LOCAL( epee::net_utils::network_throttle_manager::network_throttle_manager::m_lock_get_global_throttle_in );
@@ -380,6 +368,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
if(!recv_res)
{
//_info("[sock " << socket().native_handle() << "] protocol_want_close");
//some error in protocol, protocol handler ask to close connection
boost::interprocess::ipcdetail::atomic_write32(&m_want_close_connection, 1);
bool do_shutdown = false;
@@ -410,12 +399,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
else
{
_dbg3("[sock " << socket().native_handle() << "] peer closed connection");
bool do_shutdown = false;
CRITICAL_REGION_BEGIN(m_send_que_lock);
if(!m_send_que.size())
do_shutdown = true;
CRITICAL_REGION_END();
if (m_ready_to_close || do_shutdown)
if (m_ready_to_close)
shutdown();
}
m_ready_to_close = true;
@@ -475,7 +459,6 @@ PRAGMA_WARNING_DISABLE_VS(4355)
{
MERROR("SSL handshake failed");
boost::interprocess::ipcdetail::atomic_write32(&m_want_close_connection, 1);
m_ready_to_close = true;
bool do_shutdown = false;
CRITICAL_REGION_BEGIN(m_send_que_lock);
if(!m_send_que.size())
@@ -527,7 +510,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
bool connection<t_protocol_handler>::do_send(byte_slice message) {
bool connection<t_protocol_handler>::do_send(const void* ptr, size_t cb) {
TRY_ENTRY();
// Use safe_shared_from_this, because of this is public method and it can be called on the object being deleted
@@ -536,9 +519,6 @@ PRAGMA_WARNING_DISABLE_VS(4355)
if (m_was_shutdown) return false;
// TODO avoid copy
std::uint8_t const* const message_data = message.data();
const std::size_t message_size = message.size();
const double factor = 32; // TODO config
typedef long long signed int t_safe; // my t_size to avoid any overunderflow in arithmetic
const t_safe chunksize_good = (t_safe)( 1024 * std::max(1.0,factor) );
@@ -548,11 +528,13 @@ PRAGMA_WARNING_DISABLE_VS(4355)
CHECK_AND_ASSERT_MES(! (chunksize_max<0), false, "Negative chunksize_max" ); // make sure it is unsigned before removin sign with cast:
long long unsigned int chunksize_max_unsigned = static_cast<long long unsigned int>( chunksize_max ) ;
if (allow_split && (message_size > chunksize_max_unsigned)) {
if (allow_split && (cb > chunksize_max_unsigned)) {
{ // LOCK: chunking
epee::critical_region_t<decltype(m_chunking_lock)> send_guard(m_chunking_lock); // *** critical ***
MDEBUG("do_send() will SPLIT into small chunks, from packet="<<message_size<<" B for ptr="<<message_data);
MDEBUG("do_send() will SPLIT into small chunks, from packet="<<cb<<" B for ptr="<<ptr);
t_safe all = cb; // all bytes to send
t_safe pos = 0; // current sending position
// 01234567890
// ^^^^ (pos=0, len=4) ; pos:=pos+len, pos=4
// ^^^^ (pos=4, len=4) ; pos:=pos+len, pos=8
@@ -562,25 +544,40 @@ PRAGMA_WARNING_DISABLE_VS(4355)
// char* buf = new char[ bufsize ];
bool all_ok = true;
while (!message.empty()) {
byte_slice chunk = message.take_slice(chunksize_good);
while (pos < all) {
t_safe lenall = all-pos; // length from here to end
t_safe len = std::min( chunksize_good , lenall); // take a smaller part
CHECK_AND_ASSERT_MES(len<=chunksize_good, false, "len too large");
// pos=8; len=4; all=10; len=3;
MDEBUG("chunk_start="<<(void*)chunk.data()<<" ptr="<<message_data<<" pos="<<(chunk.data() - message_data));
MDEBUG("part of " << message.size() << ": pos="<<(chunk.data() - message_data) << " len="<<chunk.size());
CHECK_AND_ASSERT_MES(! (len<0), false, "negative len"); // check before we cast away sign:
unsigned long long int len_unsigned = static_cast<long long int>( len );
CHECK_AND_ASSERT_MES(len>0, false, "len not strictly positive"); // (redundant)
CHECK_AND_ASSERT_MES(len_unsigned < std::numeric_limits<size_t>::max(), false, "Invalid len_unsigned"); // yeap we want strong < then max size, to be sure
void *chunk_start = ((char*)ptr) + pos;
MDEBUG("chunk_start="<<chunk_start<<" ptr="<<ptr<<" pos="<<pos);
CHECK_AND_ASSERT_MES(chunk_start >= ptr, false, "Pointer wraparound"); // not wrapped around address?
//std::memcpy( (void*)buf, chunk_start, len);
bool ok = do_send_chunk(std::move(chunk)); // <====== ***
MDEBUG("part of " << lenall << ": pos="<<pos << " len="<<len);
bool ok = do_send_chunk(chunk_start, len); // <====== ***
all_ok = all_ok && ok;
if (!all_ok) {
MDEBUG("do_send() DONE ***FAILED*** from packet="<<message_size<<" B for ptr="<<message_data);
MDEBUG("do_send() DONE ***FAILED*** from packet="<<cb<<" B for ptr="<<ptr);
MDEBUG("do_send() SEND was aborted in middle of big package - this is mostly harmless "
<< " (e.g. peer closed connection) but if it causes trouble tell us at #monero-dev. " << message_size);
<< " (e.g. peer closed connection) but if it causes trouble tell us at #monero-dev. " << cb);
return false; // partial failure in sending
}
pos = pos+len;
CHECK_AND_ASSERT_MES(pos >0, false, "pos <= 0");
// (in catch block, or uniq pointer) delete buf;
} // each chunk
MDEBUG("do_send() DONE SPLIT from packet="<<message_size<<" B for ptr="<<message_data);
MDEBUG("do_send() DONE SPLIT from packet="<<cb<<" B for ptr="<<ptr);
MDEBUG("do_send() m_connection_type = " << m_connection_type);
@@ -588,7 +585,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
} // LOCK: chunking
} // a big block (to be chunked) - all chunks
else { // small block
return do_send_chunk(std::move(message)); // just send as 1 big chunk
return do_send_chunk(ptr,cb); // just send as 1 big chunk
}
CATCH_ENTRY_L0("connection<t_protocol_handler>::do_send", false);
@@ -596,7 +593,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
bool connection<t_protocol_handler>::do_send_chunk(byte_slice chunk)
bool connection<t_protocol_handler>::do_send_chunk(const void* ptr, size_t cb)
{
TRY_ENTRY();
// Use safe_shared_from_this, because of this is public method and it can be called on the object being deleted
@@ -605,18 +602,16 @@ PRAGMA_WARNING_DISABLE_VS(4355)
return false;
if(m_was_shutdown)
return false;
double current_speed_up;
{
CRITICAL_REGION_LOCAL(m_throttle_speed_out_mutex);
m_throttle_speed_out.handle_trafic_exact(chunk.size());
current_speed_up = m_throttle_speed_out.get_current_speed();
m_throttle_speed_out.handle_trafic_exact(cb);
context.m_current_speed_up = m_throttle_speed_out.get_current_speed();
context.m_max_speed_up = std::max(context.m_max_speed_up, context.m_current_speed_up);
}
context.m_current_speed_up = current_speed_up;
context.m_max_speed_up = std::max(context.m_max_speed_up, current_speed_up);
//_info("[sock " << socket().native_handle() << "] SEND " << cb);
context.m_last_send = time(NULL);
context.m_send_cnt += chunk.size();
context.m_send_cnt += cb;
//some data should be wrote to stream
//request complete
@@ -636,18 +631,8 @@ PRAGMA_WARNING_DISABLE_VS(4355)
return false; // aborted
}*/
using engine = std::mt19937;
engine rng;
std::random_device dev;
std::seed_seq::result_type rand[engine::state_size]{}; // Use complete bit space
std::generate_n(rand, engine::state_size, std::ref(dev));
std::seed_seq seed(rand, rand + engine::state_size);
rng.seed(seed);
long int ms = 250 + (rng() % 50);
MDEBUG("Sleeping because QUEUE is FULL, in " << __FUNCTION__ << " for " << ms << " ms before packet_size="<<chunk.size()); // XXX debug sleep
long int ms = 250 + (rand()%50);
MDEBUG("Sleeping because QUEUE is FULL, in " << __FUNCTION__ << " for " << ms << " ms before packet_size="<<cb); // XXX debug sleep
m_send_que_lock.unlock();
boost::this_thread::sleep(boost::posix_time::milliseconds( ms ) );
m_send_que_lock.lock();
@@ -660,11 +645,12 @@ PRAGMA_WARNING_DISABLE_VS(4355)
}
}
m_send_que.push_back(std::move(chunk));
m_send_que.resize(m_send_que.size()+1);
m_send_que.back().assign((const char*)ptr, cb);
if(m_send_que.size() > 1)
{ // active operation should be in progress, nothing to do, just wait last operation callback
auto size_now = m_send_que.back().size();
auto size_now = cb;
MDEBUG("do_send_chunk() NOW just queues: packet="<<size_now<<" B, is added to queue-size="<<m_send_que.size());
//do_send_handler_delayed( ptr , size_now ); // (((H))) // empty function
@@ -682,7 +668,7 @@ PRAGMA_WARNING_DISABLE_VS(4355)
auto size_now = m_send_que.front().size();
MDEBUG("do_send_chunk() NOW SENSD: packet="<<size_now<<" B");
if (speed_limit_is_enabled())
do_send_handler_write( m_send_que.back().data(), m_send_que.back().size() ); // (((H)))
do_send_handler_write( ptr , size_now ); // (((H)))
CHECK_AND_ASSERT_MES( size_now == m_send_que.front().size(), false, "Unexpected queue size");
reset_timer(get_default_timeout(), false);
@@ -754,11 +740,6 @@ PRAGMA_WARNING_DISABLE_VS(4355)
MERROR("Resetting timer on a dead object");
return;
}
if (m_was_shutdown)
{
MERROR("Setting timer on a shut down object");
return;
}
if (add)
ms += m_timer.expires_from_now();
m_timer.expires_from_now(ms);
@@ -915,18 +896,16 @@ PRAGMA_WARNING_DISABLE_VS(4355)
template<class t_protocol_handler>
boosted_tcp_server<t_protocol_handler>::boosted_tcp_server( t_connection_type connection_type ) :
m_state(std::make_shared<typename connection<t_protocol_handler>::shared_state>()),
m_state(boost::make_shared<typename connection<t_protocol_handler>::shared_state>()),
m_io_service_local_instance(new worker()),
io_service_(m_io_service_local_instance->io_service),
acceptor_(io_service_),
acceptor_ipv6(io_service_),
default_remote(),
m_stop_signal_sent(false), m_port(0),
m_threads_count(0),
m_thread_index(0),
m_connection_type( connection_type ),
new_connection_(),
new_connection_ipv6()
new_connection_()
{
create_server_type_map();
m_thread_name_prefix = "NET";
@@ -934,17 +913,15 @@ PRAGMA_WARNING_DISABLE_VS(4355)
template<class t_protocol_handler>
boosted_tcp_server<t_protocol_handler>::boosted_tcp_server(boost::asio::io_service& extarnal_io_service, t_connection_type connection_type) :
m_state(std::make_shared<typename connection<t_protocol_handler>::shared_state>()),
m_state(boost::make_shared<typename connection<t_protocol_handler>::shared_state>()),
io_service_(extarnal_io_service),
acceptor_(io_service_),
acceptor_ipv6(io_service_),
default_remote(),
m_stop_signal_sent(false), m_port(0),
m_threads_count(0),
m_thread_index(0),
m_connection_type(connection_type),
new_connection_(),
new_connection_ipv6()
new_connection_()
{
create_server_type_map();
m_thread_name_prefix = "NET";
@@ -966,94 +943,29 @@ PRAGMA_WARNING_DISABLE_VS(4355)
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
bool boosted_tcp_server<t_protocol_handler>::init_server(uint32_t port, const std::string& address,
uint32_t port_ipv6, const std::string& address_ipv6, bool use_ipv6, bool require_ipv4,
ssl_options_t ssl_options)
bool boosted_tcp_server<t_protocol_handler>::init_server(uint32_t port, const std::string address, ssl_options_t ssl_options)
{
TRY_ENTRY();
m_stop_signal_sent = false;
m_port = port;
m_port_ipv6 = port_ipv6;
m_address = address;
m_address_ipv6 = address_ipv6;
m_use_ipv6 = use_ipv6;
m_require_ipv4 = require_ipv4;
if (ssl_options)
m_state->configure_ssl(std::move(ssl_options));
std::string ipv4_failed = "";
std::string ipv6_failed = "";
try
{
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(address, boost::lexical_cast<std::string>(port), boost::asio::ip::tcp::resolver::query::canonical_name);
boost::asio::ip::tcp::endpoint endpoint = *resolver.resolve(query);
acceptor_.open(endpoint.protocol());
#if !defined(_WIN32)
acceptor_.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
#endif
acceptor_.bind(endpoint);
acceptor_.listen();
boost::asio::ip::tcp::endpoint binded_endpoint = acceptor_.local_endpoint();
m_port = binded_endpoint.port();
MDEBUG("start accept (IPv4)");
new_connection_.reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, m_state->ssl_options().support));
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&boosted_tcp_server<t_protocol_handler>::handle_accept_ipv4, this,
boost::asio::placeholders::error));
}
catch (const std::exception &e)
{
ipv4_failed = e.what();
}
if (ipv4_failed != "")
{
MERROR("Failed to bind IPv4: " << ipv4_failed);
if (require_ipv4)
{
throw std::runtime_error("Failed to bind IPv4 (set to required)");
}
}
if (use_ipv6)
{
try
{
if (port_ipv6 == 0) port_ipv6 = port; // default arg means bind to same port as ipv4
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(address_ipv6, boost::lexical_cast<std::string>(port_ipv6), boost::asio::ip::tcp::resolver::query::canonical_name);
boost::asio::ip::tcp::endpoint endpoint = *resolver.resolve(query);
acceptor_ipv6.open(endpoint.protocol());
#if !defined(_WIN32)
acceptor_ipv6.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
#endif
acceptor_ipv6.set_option(boost::asio::ip::v6_only(true));
acceptor_ipv6.bind(endpoint);
acceptor_ipv6.listen();
boost::asio::ip::tcp::endpoint binded_endpoint = acceptor_ipv6.local_endpoint();
m_port_ipv6 = binded_endpoint.port();
MDEBUG("start accept (IPv6)");
new_connection_ipv6.reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, m_state->ssl_options().support));
acceptor_ipv6.async_accept(new_connection_ipv6->socket(),
boost::bind(&boosted_tcp_server<t_protocol_handler>::handle_accept_ipv6, this,
boost::asio::placeholders::error));
}
catch (const std::exception &e)
{
ipv6_failed = e.what();
}
}
if (use_ipv6 && ipv6_failed != "")
{
MERROR("Failed to bind IPv6: " << ipv6_failed);
if (ipv4_failed != "")
{
throw std::runtime_error("Failed to bind IPv4 and IPv6");
}
}
// Open the acceptor with the option to reuse the address (i.e. SO_REUSEADDR).
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(address, boost::lexical_cast<std::string>(port), boost::asio::ip::tcp::resolver::query::canonical_name);
boost::asio::ip::tcp::endpoint endpoint = *resolver.resolve(query);
acceptor_.open(endpoint.protocol());
acceptor_.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen();
boost::asio::ip::tcp::endpoint binded_endpoint = acceptor_.local_endpoint();
m_port = binded_endpoint.port();
MDEBUG("start accept");
new_connection_.reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, m_state->ssl_options().support));
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&boosted_tcp_server<t_protocol_handler>::handle_accept, this,
boost::asio::placeholders::error));
return true;
}
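A note on the dual-stack variant of init_server() above, restating what the shown code does:
// IPv4 and IPv6 are served by two independent acceptors. acceptor_ipv6 sets
// boost::asio::ip::v6_only(true) so it never claims IPv4 traffic, letting both sockets
// bind the same port number; port_ipv6 == 0 falls back to the IPv4 port, and
// require_ipv4 turns an IPv4 bind failure into a hard error rather than a warning.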
@@ -1072,23 +984,15 @@ PRAGMA_WARNING_DISABLE_VS(4355)
PUSH_WARNINGS
DISABLE_GCC_WARNING(maybe-uninitialized)
template<class t_protocol_handler>
bool boosted_tcp_server<t_protocol_handler>::init_server(const std::string port, const std::string& address,
const std::string port_ipv6, const std::string address_ipv6, bool use_ipv6, bool require_ipv4,
ssl_options_t ssl_options)
bool boosted_tcp_server<t_protocol_handler>::init_server(const std::string port, const std::string& address, ssl_options_t ssl_options)
{
uint32_t p = 0;
uint32_t p_ipv6 = 0;
if (port.size() && !string_tools::get_xtype_from_string(p, port)) {
MERROR("Failed to convert port no = " << port);
return false;
}
if (port_ipv6.size() && !string_tools::get_xtype_from_string(p_ipv6, port_ipv6)) {
MERROR("Failed to convert port no = " << port_ipv6);
return false;
}
return this->init_server(p, address, p_ipv6, address_ipv6, use_ipv6, require_ipv4, std::move(ssl_options));
return this->init_server(p, address, std::move(ssl_options));
}
POP_WARNINGS
//---------------------------------------------------------------------------------
@@ -1180,7 +1084,7 @@ POP_WARNINGS
{
//some problems with the listening socket ?..
_dbg1("Net service stopped without stop request, restarting...");
if(!this->init_server(m_port, m_address, m_port_ipv6, m_address_ipv6, m_use_ipv6, m_require_ipv4))
if(!this->init_server(m_port, m_address))
{
_dbg1("Reiniting service failed, exit.");
return false;
@@ -1246,52 +1150,29 @@ POP_WARNINGS
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
void boosted_tcp_server<t_protocol_handler>::handle_accept_ipv4(const boost::system::error_code& e)
{
this->handle_accept(e, false);
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
void boosted_tcp_server<t_protocol_handler>::handle_accept_ipv6(const boost::system::error_code& e)
{
this->handle_accept(e, true);
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
void boosted_tcp_server<t_protocol_handler>::handle_accept(const boost::system::error_code& e, bool ipv6)
void boosted_tcp_server<t_protocol_handler>::handle_accept(const boost::system::error_code& e)
{
MDEBUG("handle_accept");
boost::asio::ip::tcp::acceptor* current_acceptor = &acceptor_;
connection_ptr* current_new_connection = &new_connection_;
auto accept_function_pointer = &boosted_tcp_server<t_protocol_handler>::handle_accept_ipv4;
if (ipv6)
{
current_acceptor = &acceptor_ipv6;
current_new_connection = &new_connection_ipv6;
accept_function_pointer = &boosted_tcp_server<t_protocol_handler>::handle_accept_ipv6;
}
try
{
if (!e)
{
if (m_connection_type == e_connection_type_RPC) {
const char *ssl_message = "unknown";
switch ((*current_new_connection)->get_ssl_support())
{
case epee::net_utils::ssl_support_t::e_ssl_support_disabled: ssl_message = "disabled"; break;
case epee::net_utils::ssl_support_t::e_ssl_support_enabled: ssl_message = "enabled"; break;
case epee::net_utils::ssl_support_t::e_ssl_support_autodetect: ssl_message = "autodetection"; break;
}
MDEBUG("New server for RPC connections, SSL " << ssl_message);
(*current_new_connection)->setRpcStation(); // hopefully this is not needed actually
}
connection_ptr conn(std::move((*current_new_connection)));
(*current_new_connection).reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, conn->get_ssl_support()));
current_acceptor->async_accept((*current_new_connection)->socket(),
boost::bind(accept_function_pointer, this,
boost::asio::placeholders::error));
if (m_connection_type == e_connection_type_RPC) {
const char *ssl_message = "unknown";
switch (new_connection_->get_ssl_support())
{
case epee::net_utils::ssl_support_t::e_ssl_support_disabled: ssl_message = "disabled"; break;
case epee::net_utils::ssl_support_t::e_ssl_support_enabled: ssl_message = "enabled"; break;
case epee::net_utils::ssl_support_t::e_ssl_support_autodetect: ssl_message = "autodetection"; break;
}
MDEBUG("New server for RPC connections, SSL " << ssl_message);
new_connection_->setRpcStation(); // hopefully this is not needed actually
}
connection_ptr conn(std::move(new_connection_));
new_connection_.reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, conn->get_ssl_support()));
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&boosted_tcp_server<t_protocol_handler>::handle_accept, this,
boost::asio::placeholders::error));
boost::asio::socket_base::keep_alive opt(true);
conn->socket().set_option(opt);
@@ -1323,10 +1204,10 @@ POP_WARNINGS
assert(m_state != nullptr); // always set in constructor
_erro("Some problems at accept: " << e.message() << ", connections_count = " << m_state->sock_count);
misc_utils::sleep_no_w(100);
(*current_new_connection).reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, (*current_new_connection)->get_ssl_support()));
current_acceptor->async_accept((*current_new_connection)->socket(),
boost::bind(accept_function_pointer, this,
boost::asio::placeholders::error));
new_connection_.reset(new connection<t_protocol_handler>(io_service_, m_state, m_connection_type, new_connection_->get_ssl_support()));
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&boosted_tcp_server<t_protocol_handler>::handle_accept, this,
boost::asio::placeholders::error));
}
//---------------------------------------------------------------------------------
template<class t_protocol_handler>
@@ -1460,84 +1341,23 @@ POP_WARNINGS
epee::misc_utils::auto_scope_leave_caller scope_exit_handler = epee::misc_utils::create_scope_leave_handler([&](){ CRITICAL_REGION_LOCAL(connections_mutex); connections_.erase(new_connection_l); });
boost::asio::ip::tcp::socket& sock_ = new_connection_l->socket();
bool try_ipv6 = false;
//////////////////////////////////////////////////////////////////////////
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(boost::asio::ip::tcp::v4(), adr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
boost::system::error_code resolve_error;
boost::asio::ip::tcp::resolver::iterator iterator;
try
{
//resolving ipv4 address as ipv6 throws, catch here and move on
iterator = resolver.resolve(query, resolve_error);
}
catch (const boost::system::system_error& e)
{
if (!m_use_ipv6 || (resolve_error != boost::asio::error::host_not_found &&
resolve_error != boost::asio::error::host_not_found_try_again))
{
throw;
}
try_ipv6 = true;
}
catch (...)
{
throw;
}
std::string bind_ip_to_use;
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
boost::asio::ip::tcp::resolver::iterator end;
if(iterator == end)
{
if (!m_use_ipv6)
{
_erro("Failed to resolve " << adr);
return false;
}
else
{
try_ipv6 = true;
MINFO("Resolving address as IPv4 failed, trying IPv6");
}
}
else
{
bind_ip_to_use = bind_ip;
_erro("Failed to resolve " << adr);
return false;
}
//////////////////////////////////////////////////////////////////////////
if (try_ipv6)
{
boost::asio::ip::tcp::resolver::query query6(boost::asio::ip::tcp::v6(), adr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
iterator = resolver.resolve(query6, resolve_error);
if(iterator == end)
{
_erro("Failed to resolve " << adr);
return false;
}
else
{
if (bind_ip == "0.0.0.0")
{
bind_ip_to_use = "::";
}
else
{
bind_ip_to_use = "";
}
}
}
MDEBUG("Trying to connect to " << adr << ":" << port << ", bind_ip = " << bind_ip_to_use);
//boost::asio::ip::tcp::endpoint remote_endpoint(boost::asio::ip::address::from_string(addr.c_str()), port);
boost::asio::ip::tcp::endpoint remote_endpoint(*iterator);
auto try_connect_result = try_connect(new_connection_l, adr, port, sock_, remote_endpoint, bind_ip_to_use, conn_timeout, ssl_support);
auto try_connect_result = try_connect(new_connection_l, adr, port, sock_, remote_endpoint, bind_ip, conn_timeout, ssl_support);
if (try_connect_result == CONNECT_FAILURE)
return false;
if (ssl_support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect && try_connect_result == CONNECT_NO_SSL)
@@ -1545,7 +1365,7 @@ POP_WARNINGS
// we connected, but could not connect with SSL, try without
MERROR("SSL handshake failed on an autodetect connection, reconnecting without SSL");
new_connection_l->disable_ssl();
try_connect_result = try_connect(new_connection_l, adr, port, sock_, remote_endpoint, bind_ip_to_use, conn_timeout, epee::net_utils::ssl_support_t::e_ssl_support_disabled);
try_connect_result = try_connect(new_connection_l, adr, port, sock_, remote_endpoint, bind_ip, conn_timeout, epee::net_utils::ssl_support_t::e_ssl_support_disabled);
if (try_connect_result != CONNECT_SUCCESS)
return false;
}
@@ -1585,59 +1405,17 @@ POP_WARNINGS
epee::misc_utils::auto_scope_leave_caller scope_exit_handler = epee::misc_utils::create_scope_leave_handler([&](){ CRITICAL_REGION_LOCAL(connections_mutex); connections_.erase(new_connection_l); });
boost::asio::ip::tcp::socket& sock_ = new_connection_l->socket();
bool try_ipv6 = false;
//////////////////////////////////////////////////////////////////////////
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(boost::asio::ip::tcp::v4(), adr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
boost::system::error_code resolve_error;
boost::asio::ip::tcp::resolver::iterator iterator;
try
{
//resolving ipv4 address as ipv6 throws, catch here and move on
iterator = resolver.resolve(query, resolve_error);
}
catch (const boost::system::system_error& e)
{
if (!m_use_ipv6 || (resolve_error != boost::asio::error::host_not_found &&
resolve_error != boost::asio::error::host_not_found_try_again))
{
throw;
}
try_ipv6 = true;
}
catch (...)
{
throw;
}
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
boost::asio::ip::tcp::resolver::iterator end;
if(iterator == end)
{
if (!try_ipv6)
{
_erro("Failed to resolve " << adr);
return false;
}
else
{
MINFO("Resolving address as IPv4 failed, trying IPv6");
}
_erro("Failed to resolve " << adr);
return false;
}
if (try_ipv6)
{
boost::asio::ip::tcp::resolver::query query6(boost::asio::ip::tcp::v6(), adr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
iterator = resolver.resolve(query6, resolve_error);
if(iterator == end)
{
_erro("Failed to resolve " << adr);
return false;
}
}
//////////////////////////////////////////////////////////////////////////
boost::asio::ip::tcp::endpoint remote_endpoint(*iterator);
sock_.open(remote_endpoint.protocol());

View File

@@ -49,7 +49,6 @@
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include "byte_slice.h"
#include "net/net_utils_base.h"
#include "net/net_ssl.h"
#include "syncobj.h"
@@ -100,7 +99,7 @@ class connection_basic_pimpl; // PIMPL for this class
class connection_basic { // not-templated base class for rapid developmet of some code parts
// beware of removing const, net_utils::connection is sketchily doing a cast to prevent storing ptr twice
const std::shared_ptr<connection_basic_shared_state> m_state;
const boost::shared_ptr<connection_basic_shared_state> m_state;
public:
std::unique_ptr< connection_basic_pimpl > mI; // my Implementation
@@ -109,7 +108,7 @@ class connection_basic { // not-templated base class for rapid developmet of some code parts
volatile uint32_t m_want_close_connection;
std::atomic<bool> m_was_shutdown;
critical_section m_send_que_lock;
std::deque<byte_slice> m_send_que;
std::list<std::string> m_send_que;
volatile bool m_is_multithreaded;
/// Strand to ensure the connection's handlers are not called concurrently.
boost::asio::io_service::strand strand_;
@@ -119,8 +118,8 @@ class connection_basic { // not-templated base class for rapid developmet of some code parts
public:
// first counter is the ++/-- count of current sockets, the other socket_number is only-increasing ++ number generator
connection_basic(boost::asio::ip::tcp::socket&& socket, std::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support);
connection_basic(boost::asio::io_service &io_service, std::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support);
connection_basic(boost::asio::ip::tcp::socket&& socket, boost::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support);
connection_basic(boost::asio::io_service &io_service, boost::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support);
virtual ~connection_basic() noexcept(false);
@@ -187,6 +186,8 @@ class connection_basic { // not-templated base class for rapid developmet of some code parts
void sleep_before_packet(size_t packet_size, int phase, int q_len); // execute a sleep ; phase is not really used now(?)
static void save_limit_to_file(int limit); ///< for dr-monero
static double get_sleep_time(size_t cb);
static void set_save_graph(bool save_graph);
};
} // nameserver
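The send-queue change above pairs with the do_send()/do_send_chunk() rewrite earlier in this diff:
// std::list<std::string>  -- every queued chunk is copied in via assign(ptr, cb)
// std::deque<byte_slice>  -- chunks are std::move()d into the queue, so queuing no
//                            longer copies the payload bytes a second time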

View File

@@ -49,7 +49,7 @@ namespace net_utils
{
invalid = 0,
public_ = 1, // public is keyword
i2p = 2, // order from here changes priority of selection for origin TXes
i2p = 2,
tor = 3
};

View File

@@ -577,10 +577,6 @@ namespace net_utils
if (query_info.m_http_method != http::http_method_options)
{
res = handle_request(query_info, response);
if (response.m_response_code == 500)
{
m_want_close = true; // close on all "Internal server error"s
}
}
else
{
@@ -591,12 +587,11 @@ namespace net_utils
std::string response_data = get_response_header(response);
//LOG_PRINT_L0("HTTP_SEND: << \r\n" << response_data + response.m_body);
LOG_PRINT_L3("HTTP_RESPONSE_HEAD: << \r\n" << response_data);
LOG_PRINT_L3("HTTP_RESPONSE_HEAD: << \r\n" << response_data);
m_psnd_hndlr->do_send((void*)response_data.data(), response_data.size());
if ((response.m_body.size() && (query_info.m_http_method != http::http_method_head)) || (query_info.m_http_method == http::http_method_options))
response_data += response.m_body;
m_psnd_hndlr->do_send(byte_slice{std::move(response_data)});
m_psnd_hndlr->do_send((void*)response.m_body.data(), response.m_body.size());
m_psnd_hndlr->send_done();
return res;
}
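In the single-buffer variant above the body is appended to the header string only when it should actually be transmitted, which keeps HEAD semantics intact while using one send call:
// header and body are merged into response_data and handed over as one
// do_send(byte_slice) call; the body is skipped for HEAD requests and always included
// (possibly empty) for OPTIONS. The other variant issues two pointer/length do_send()
// calls followed by send_done().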

View File

@@ -71,7 +71,7 @@
MINFO(m_conn_context << "calling " << s_pattern); \
if(!callback_f(static_cast<command_type::request&>(req), static_cast<command_type::response&>(resp), &m_conn_context)) \
{ \
MERROR(m_conn_context << "Failed to " << #callback_f << "()"); \
LOG_ERROR("Failed to " << #callback_f << "()"); \
response_info.m_response_code = 500; \
response_info.m_response_comment = "Internal Server Error"; \
return true; \
@@ -99,7 +99,7 @@
MINFO(m_conn_context << "calling " << s_pattern); \
if(!callback_f(static_cast<command_type::request&>(req), static_cast<command_type::response&>(resp), &m_conn_context)) \
{ \
MERROR(m_conn_context << "Failed to " << #callback_f << "()"); \
LOG_ERROR("Failed to " << #callback_f << "()"); \
response_info.m_response_code = 500; \
response_info.m_response_comment = "Internal Server Error"; \
return true; \

View File

@@ -57,7 +57,6 @@ namespace epee
{}
bool init(std::function<void(size_t, uint8_t*)> rng, const std::string& bind_port = "0", const std::string& bind_ip = "0.0.0.0",
const std::string& bind_ipv6_address = "::", bool use_ipv6 = false, bool require_ipv4 = true,
std::vector<std::string> access_control_origins = std::vector<std::string>(),
boost::optional<net_utils::http::login> user = boost::none,
net_utils::ssl_options_t ssl_options = net_utils::ssl_support_t::e_ssl_support_autodetect)
@@ -76,12 +75,8 @@ namespace epee
m_net_server.get_config_object().m_user = std::move(user);
MGINFO("Binding on " << bind_ip << " (IPv4):" << bind_port);
if (use_ipv6)
{
MGINFO("Binding on " << bind_ipv6_address << " (IPv6):" << bind_port);
}
bool res = m_net_server.init_server(bind_port, bind_ip, bind_port, bind_ipv6_address, use_ipv6, require_ipv4, std::move(ssl_options));
MGINFO("Binding on " << bind_ip << ":" << bind_port);
bool res = m_net_server.init_server(bind_port, bind_ip, std::move(ssl_options));
if(!res)
{
LOG_ERROR("Failed to bind server");

View File

@@ -29,11 +29,7 @@
#ifndef _LEVIN_BASE_H_
#define _LEVIN_BASE_H_
#include <cstdint>
#include "byte_slice.h"
#include "net_utils_base.h"
#include "span.h"
#define LEVIN_SIGNATURE 0x0101010101012101LL //Bender's nightmare
@@ -76,8 +72,6 @@ namespace levin
#define LEVIN_PACKET_REQUEST 0x00000001
#define LEVIN_PACKET_RESPONSE 0x00000002
#define LEVIN_PACKET_BEGIN 0x00000004
#define LEVIN_PACKET_END 0x00000008
#define LEVIN_PROTOCOL_VER_0 0
@@ -124,30 +118,9 @@ namespace levin
}
}
//! \return Intialized levin header.
bucket_head2 make_header(uint32_t command, uint64_t msg_size, uint32_t flags, bool expect_response) noexcept;
//! \return A levin notification message.
byte_slice make_notify(int command, epee::span<const std::uint8_t> payload);
/*! Generate a dummy levin message.
\param noise_bytes Total size of the returned `byte_slice`.
\return `nullptr` if `noise_size` is smaller than the levin header.
Otherwise, a dummy levin message. */
byte_slice make_noise_notify(std::size_t noise_bytes);
/*! Generate 1+ levin messages that are identical to the noise message size.
\param noise Each levin message will be identical to the size of this
message. The bytes from this message will be used for padding.
\return `nullptr` if `noise.size()` is less than the levin header size.
Otherwise, a levin notification message OR 2+ levin fragment messages.
Each message is `noise.size()` in length. */
byte_slice make_fragmented_notify(const byte_slice& noise, int command, epee::span<const std::uint8_t> payload);
}
}
#endif //_LEVIN_BASE_H_
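A usage sketch for the noise/fragment helpers declared on one side of this diff; the command id and payload below are placeholders, and the epee::levin namespace plus the byte_slice.h include shown above are assumed:
const int command = 2001;                                                  // placeholder command id
const epee::span<const std::uint8_t> payload{(const std::uint8_t*)"", 0}; // placeholder payload
const epee::byte_slice noise = epee::levin::make_noise_notify(3072);
const epee::byte_slice msg   = epee::levin::make_fragmented_notify(noise, command, payload);
// Every message produced by make_fragmented_notify is exactly noise.size() bytes long,
// so real traffic sent this way matches the on-wire size of the dummy noise messages;
// a noise buffer smaller than the levin header yields a nullptr slice.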

View File

@@ -157,9 +157,10 @@ namespace levin
m_current_head.m_return_code = m_config.m_pcommands_handler->invoke(m_current_head.m_command, buff_to_invoke, return_buff, m_conn_context);
m_current_head.m_cb = return_buff.size();
m_current_head.m_have_to_return_data = false;
std::string send_buff((const char*)&m_current_head, sizeof(m_current_head));
send_buff += return_buff;
return_buff.insert(0, (const char*)&m_current_head, sizeof(m_current_head));
if(!m_psnd_hndlr->do_send(byte_slice{std::move(return_buff)}))
if(!m_psnd_hndlr->do_send(send_buff.data(), send_buff.size()))
return false;
}

View File

@@ -32,7 +32,6 @@
#include <boost/smart_ptr/make_shared.hpp>
#include <atomic>
#include <deque>
#include "levin_base.h"
#include "buffer.h"
@@ -92,7 +91,6 @@ public:
int invoke_async(int command, const epee::span<const uint8_t> in_buff, boost::uuids::uuid connection_id, const callback_t &cb, size_t timeout = LEVIN_DEFAULT_TIMEOUT_PRECONFIGURED);
int notify(int command, const epee::span<const uint8_t> in_buff, boost::uuids::uuid connection_id);
int send(epee::byte_slice message, const boost::uuids::uuid& connection_id);
bool close(boost::uuids::uuid connection_id);
bool update_connection_context(const t_connection_context& contxt);
bool request_callback(boost::uuids::uuid connection_id);
@@ -101,8 +99,6 @@ public:
template<class callback_t>
bool for_connection(const boost::uuids::uuid &connection_id, const callback_t &cb);
size_t get_connections_count();
size_t get_out_connections_count();
size_t get_in_connections_count();
void set_handler(levin_commands_handler<t_connection_context>* handler, void (*destroy)(levin_commands_handler<t_connection_context>*) = NULL);
async_protocol_handler_config():m_pcommands_handler(NULL), m_pcommands_handler_destroy(NULL), m_max_packet_size(LEVIN_DEFAULT_MAX_PACKET_SIZE), m_invoke_timeout(LEVIN_DEFAULT_TIMEOUT_PRECONFIGURED)
@@ -119,22 +115,6 @@ public:
template<class t_connection_context = net_utils::connection_context_base>
class async_protocol_handler
{
std::string m_fragment_buffer;
bool send_message(uint32_t command, epee::span<const uint8_t> in_buff, uint32_t flags, bool expect_response)
{
const bucket_head2 head = make_header(command, in_buff.size(), flags, expect_response);
if(!m_pservice_endpoint->do_send(byte_slice{as_byte_span(head), in_buff}))
return false;
MDEBUG(m_connection_context << "LEVIN_PACKET_SENT. [len=" << head.m_cb
<< ", flags" << head.m_flags
<< ", r?=" << head.m_have_to_return_data
<<", cmd = " << head.m_command
<< ", ver=" << head.m_protocol_version);
return true;
}
public:
typedef t_connection_context connection_context;
typedef async_protocol_handler_config<t_connection_context> config_type;
@@ -154,6 +134,7 @@ public:
critical_section m_local_inv_buff_lock;
std::string m_local_inv_buff;
critical_section m_send_lock;
critical_section m_call_lock;
volatile uint32_t m_wait_count;
@@ -272,11 +253,6 @@ public:
bool add_invoke_response_handler(const callback_t &cb, uint64_t timeout, async_protocol_handler& con, int command)
{
CRITICAL_REGION_LOCAL(m_invoke_response_handlers_lock);
if (m_protocol_released)
{
MERROR("Adding response handler to a released object");
return false;
}
boost::shared_ptr<invoke_response_handler_base> handler(boost::make_shared<anvoke_handler<callback_t>>(cb, timeout, con, command));
m_invoke_response_handlers.push_back(handler);
return handler->is_timer_started();
@@ -398,12 +374,7 @@ public:
return false;
}
// these should never fail, but do runtime check for safety
CHECK_AND_ASSERT_MES(m_config.m_max_packet_size >= m_cache_in_buffer.size(), false, "Bad m_cache_in_buffer.size()");
CHECK_AND_ASSERT_MES(m_config.m_max_packet_size - m_cache_in_buffer.size() >= m_fragment_buffer.size(), false, "Bad m_cache_in_buffer.size() + m_fragment_buffer.size()");
// flipped to subtraction; prevent overflow since m_max_packet_size is variable and public
if(cb > m_config.m_max_packet_size - m_cache_in_buffer.size() - m_fragment_buffer.size())
if(m_cache_in_buffer.size() + cb > m_config.m_max_packet_size)
{
MWARNING(m_connection_context << "Maximum packet size exceed!, m_max_packet_size = " << m_config.m_max_packet_size
<< ", packet received " << m_cache_in_buffer.size() + cb
@@ -435,38 +406,8 @@ public:
}
break;
}
{
std::string temp{};
epee::span<const uint8_t> buff_to_invoke = m_cache_in_buffer.carve((std::string::size_type)m_current_head.m_cb);
m_state = stream_state_head;
// abstract_tcp_server2.h manages max bandwidth for a p2p link
if (!(m_current_head.m_flags & (LEVIN_PACKET_REQUEST | LEVIN_PACKET_RESPONSE)))
{
// special noise/fragment command
static constexpr const uint32_t both_flags = (LEVIN_PACKET_BEGIN | LEVIN_PACKET_END);
if ((m_current_head.m_flags & both_flags) == both_flags)
break; // noise message, skip to next message
if (m_current_head.m_flags & LEVIN_PACKET_BEGIN)
m_fragment_buffer.clear();
m_fragment_buffer.append(reinterpret_cast<const char*>(buff_to_invoke.data()), buff_to_invoke.size());
if (!(m_current_head.m_flags & LEVIN_PACKET_END))
break; // skip to next message
if (m_fragment_buffer.size() < sizeof(bucket_head2))
{
MERROR(m_connection_context << "Fragmented data too small for levin header");
return false;
}
temp = std::move(m_fragment_buffer);
m_fragment_buffer.clear();
std::memcpy(std::addressof(m_current_head), std::addressof(temp[0]), sizeof(bucket_head2));
buff_to_invoke = {reinterpret_cast<const uint8_t*>(temp.data()) + sizeof(bucket_head2), temp.size() - sizeof(bucket_head2)};
}
bool is_response = (m_oponent_protocol_ver == LEVIN_PROTOCOL_VER_1 && m_current_head.m_flags&LEVIN_PACKET_RESPONSE);
@@ -515,33 +456,43 @@ public:
if(m_current_head.m_have_to_return_data)
{
std::string return_buff;
const uint32_t return_code = m_config.m_pcommands_handler->invoke(
m_current_head.m_command, buff_to_invoke, return_buff, m_connection_context
);
bucket_head2 head = make_header(m_current_head.m_command, return_buff.size(), LEVIN_PACKET_RESPONSE, false);
head.m_return_code = SWAP32LE(return_code);
return_buff.insert(0, reinterpret_cast<const char*>(&head), sizeof(head));
if(!m_pservice_endpoint->do_send(byte_slice{std::move(return_buff)}))
m_current_head.m_return_code = m_config.m_pcommands_handler->invoke(
m_current_head.m_command,
buff_to_invoke,
return_buff,
m_connection_context);
m_current_head.m_cb = return_buff.size();
m_current_head.m_have_to_return_data = false;
m_current_head.m_protocol_version = LEVIN_PROTOCOL_VER_1;
m_current_head.m_flags = LEVIN_PACKET_RESPONSE;
#if BYTE_ORDER == LITTLE_ENDIAN
std::string send_buff((const char*)&m_current_head, sizeof(m_current_head));
#else
bucket_head2 head = m_current_head;
head.m_signature = SWAP64LE(head.m_signature);
head.m_cb = SWAP64LE(head.m_cb);
head.m_command = SWAP32LE(head.m_command);
head.m_return_code = SWAP32LE(head.m_return_code);
head.m_flags = SWAP32LE(head.m_flags);
head.m_protocol_version = SWAP32LE(head.m_protocol_version);
std::string send_buff((const char*)&head, sizeof(head));
#endif
send_buff += return_buff;
CRITICAL_REGION_BEGIN(m_send_lock);
if(!m_pservice_endpoint->do_send(send_buff.data(), send_buff.size()))
return false;
MDEBUG(m_connection_context << "LEVIN_PACKET_SENT. [len=" << head.m_cb
<< ", flags" << head.m_flags
<< ", r?=" << head.m_have_to_return_data
<<", cmd = " << head.m_command
<< ", ver=" << head.m_protocol_version);
CRITICAL_REGION_END();
MDEBUG(m_connection_context << "LEVIN_PACKET_SENT. [len=" << m_current_head.m_cb
<< ", flags" << m_current_head.m_flags
<< ", r?=" << m_current_head.m_have_to_return_data
<<", cmd = " << m_current_head.m_command
<< ", ver=" << m_current_head.m_protocol_version);
}
else
m_config.m_pcommands_handler->notify(m_current_head.m_command, buff_to_invoke, m_connection_context);
}
// reuse small buffer
if (!temp.empty() && temp.capacity() <= 64 * 1024)
{
temp.clear();
m_fragment_buffer = std::move(temp);
}
}
m_state = stream_state_head;
break;
case stream_state_head:
{
@@ -631,10 +582,26 @@ public:
break;
}
boost::interprocess::ipcdetail::atomic_write32(&m_invoke_buf_ready, 0);
CRITICAL_REGION_BEGIN(m_invoke_response_handlers_lock);
bucket_head2 head = {0};
head.m_signature = SWAP64LE(LEVIN_SIGNATURE);
head.m_cb = SWAP64LE(in_buff.size());
head.m_have_to_return_data = true;
if(!send_message(command, in_buff, LEVIN_PACKET_REQUEST, true))
head.m_flags = SWAP32LE(LEVIN_PACKET_REQUEST);
head.m_command = SWAP32LE(command);
head.m_protocol_version = SWAP32LE(LEVIN_PROTOCOL_VER_1);
boost::interprocess::ipcdetail::atomic_write32(&m_invoke_buf_ready, 0);
CRITICAL_REGION_BEGIN(m_send_lock);
CRITICAL_REGION_LOCAL1(m_invoke_response_handlers_lock);
if(!m_pservice_endpoint->do_send(&head, sizeof(head)))
{
LOG_ERROR_CC(m_connection_context, "Failed to do_send");
err_code = LEVIN_ERROR_CONNECTION;
break;
}
if(!m_pservice_endpoint->do_send(in_buff.data(), in_buff.size()))
{
LOG_ERROR_CC(m_connection_context, "Failed to do_send");
err_code = LEVIN_ERROR_CONNECTION;
@@ -651,7 +618,7 @@ public:
if (LEVIN_OK != err_code)
{
epee::span<const uint8_t> stub_buff = nullptr;
epee::span<const uint8_t> stub_buff{(const uint8_t*)"", 0};
// Never call callback inside critical section, that can cause deadlock
cb(err_code, stub_buff, m_connection_context);
return false;
@@ -673,14 +640,36 @@ public:
if(m_deletion_initiated)
return LEVIN_ERROR_CONNECTION_DESTROYED;
boost::interprocess::ipcdetail::atomic_write32(&m_invoke_buf_ready, 0);
bucket_head2 head = {0};
head.m_signature = SWAP64LE(LEVIN_SIGNATURE);
head.m_cb = SWAP64LE(in_buff.size());
head.m_have_to_return_data = true;
if (!send_message(command, in_buff, LEVIN_PACKET_REQUEST, true))
head.m_flags = SWAP32LE(LEVIN_PACKET_REQUEST);
head.m_command = SWAP32LE(command);
head.m_protocol_version = SWAP32LE(LEVIN_PROTOCOL_VER_1);
boost::interprocess::ipcdetail::atomic_write32(&m_invoke_buf_ready, 0);
CRITICAL_REGION_BEGIN(m_send_lock);
if(!m_pservice_endpoint->do_send(&head, sizeof(head)))
{
LOG_ERROR_CC(m_connection_context, "Failed to send request");
LOG_ERROR_CC(m_connection_context, "Failed to do_send");
return LEVIN_ERROR_CONNECTION;
}
if(!m_pservice_endpoint->do_send(in_buff.data(), in_buff.size()))
{
LOG_ERROR_CC(m_connection_context, "Failed to do_send");
return LEVIN_ERROR_CONNECTION;
}
CRITICAL_REGION_END();
MDEBUG(m_connection_context << "LEVIN_PACKET_SENT. [len=" << head.m_cb
<< ", f=" << head.m_flags
<< ", r?=" << head.m_have_to_return_data
<< ", cmd = " << head.m_command
<< ", ver=" << head.m_protocol_version);
uint64_t ticks_start = misc_utils::get_tick_count();
size_t prev_size = 0;
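The header fields touched in the two variants above, as they appear in this diff (declaration order and padding in levin_base.h are not shown here and may differ):
// bucket_head2:
//   m_signature           (64-bit) LEVIN_SIGNATURE, 0x0101010101012101LL ("Bender's nightmare")
//   m_cb                  (64-bit) number of payload bytes following the header
//   m_have_to_return_data          true for invoke()-style requests expecting a reply
//   m_command             (32-bit)
//   m_return_code         (32-bit)
//   m_flags               (32-bit) LEVIN_PACKET_REQUEST / LEVIN_PACKET_RESPONSE
//                                  (plus LEVIN_PACKET_BEGIN/END on one side)
//   m_protocol_version    (32-bit) LEVIN_PROTOCOL_VER_1
// The SWAP64LE/SWAP32LE calls store every multi-byte field little-endian before the
// header goes on the wire.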
@@ -725,38 +714,33 @@ public:
if(m_deletion_initiated)
return LEVIN_ERROR_CONNECTION_DESTROYED;
if (!send_message(command, in_buff, LEVIN_PACKET_REQUEST, false))
bucket_head2 head = {0};
head.m_signature = SWAP64LE(LEVIN_SIGNATURE);
head.m_have_to_return_data = false;
head.m_cb = SWAP64LE(in_buff.size());
head.m_command = SWAP32LE(command);
head.m_protocol_version = SWAP32LE(LEVIN_PROTOCOL_VER_1);
head.m_flags = SWAP32LE(LEVIN_PACKET_REQUEST);
CRITICAL_REGION_BEGIN(m_send_lock);
if(!m_pservice_endpoint->do_send(&head, sizeof(head)))
{
LOG_ERROR_CC(m_connection_context, "Failed to send notify message");
LOG_ERROR_CC(m_connection_context, "Failed to do_send()");
return -1;
}
return 1;
}
/*! Sends `message` without adding a levin header. The message must have
been created with `make_notify`, `make_noise_notify` or
`make_fragmented_notify`. See additional instructions for
`make_fragmented_notify`.
\return 1 on success */
int send(byte_slice message)
{
const misc_utils::auto_scope_leave_caller scope_exit_handler = misc_utils::create_scope_leave_handler(
boost::bind(&async_protocol_handler::finish_outer_call, this)
);
if(m_deletion_initiated)
return LEVIN_ERROR_CONNECTION_DESTROYED;
const std::size_t length = message.size();
if (!m_pservice_endpoint->do_send(std::move(message)))
if(!m_pservice_endpoint->do_send(in_buff.data(), in_buff.size()))
{
LOG_ERROR_CC(m_connection_context, "Failed to send message, dropping it");
LOG_ERROR_CC(m_connection_context, "Failed to do_send()");
return -1;
}
CRITICAL_REGION_END();
LOG_DEBUG_CC(m_connection_context, "LEVIN_PACKET_SENT. [len=" << head.m_cb <<
", f=" << head.m_flags <<
", r?=" << head.m_have_to_return_data <<
", cmd = " << head.m_command <<
", ver=" << head.m_protocol_version);
MDEBUG(m_connection_context << "LEVIN_PACKET_SENT. [len=" << (length - sizeof(bucket_head2)) << ", r?=0]");
return 1;
}
//------------------------------------------------------------------------------------------
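// Hypothetical usage sketch (COMMAND_ID, payload and connection_id are
// placeholders, not from this diff): build a levin notify once, then hand the
// refcounted slice to a specific connection via the config-level send() below.
epee::byte_slice msg = epee::levin::make_notify(COMMAND_ID, payload);
int r = config.send(std::move(msg), connection_id); // 1 on success, 0 if the connection is gone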
@@ -796,7 +780,7 @@ void async_protocol_handler_config<t_connection_context>::delete_connections(siz
auto i = connections.end() - 1;
async_protocol_handler<t_connection_context> *conn = m_connects.at(*i);
del_connection(conn);
conn->close();
close(*i);
connections.erase(i);
}
catch (const std::out_of_range &e)
@@ -898,28 +882,6 @@ size_t async_protocol_handler_config<t_connection_context>::get_connections_coun
}
//------------------------------------------------------------------------------------------
template<class t_connection_context>
size_t async_protocol_handler_config<t_connection_context>::get_out_connections_count()
{
CRITICAL_REGION_LOCAL(m_connects_lock);
size_t count = 0;
for (const auto &c: m_connects)
if (!c.second->m_connection_context.m_is_income)
++count;
return count;
}
//------------------------------------------------------------------------------------------
template<class t_connection_context>
size_t async_protocol_handler_config<t_connection_context>::get_in_connections_count()
{
CRITICAL_REGION_LOCAL(m_connects_lock);
size_t count = 0;
for (const auto &c: m_connects)
if (c.second->m_connection_context.m_is_income)
++count;
return count;
}
//------------------------------------------------------------------------------------------
template<class t_connection_context>
void async_protocol_handler_config<t_connection_context>::set_handler(levin_commands_handler<t_connection_context>* handler, void (*destroy)(levin_commands_handler<t_connection_context>*))
{
if (m_pcommands_handler && m_pcommands_handler_destroy)
@@ -937,14 +899,6 @@ int async_protocol_handler_config<t_connection_context>::notify(int command, con
}
//------------------------------------------------------------------------------------------
template<class t_connection_context>
int async_protocol_handler_config<t_connection_context>::send(byte_slice message, const boost::uuids::uuid& connection_id)
{
async_protocol_handler<t_connection_context>* aph;
int r = find_and_lock_connection(connection_id, aph);
return LEVIN_OK == r ? aph->send(std::move(message)) : 0;
}
//------------------------------------------------------------------------------------------
template<class t_connection_context>
bool async_protocol_handler_config<t_connection_context>::close(boost::uuids::uuid connection_id)
{
CRITICAL_REGION_LOCAL(m_connects_lock);

View File

@@ -27,47 +27,13 @@
#pragma once
#include <string>
#include <boost/algorithm/string/predicate.hpp>
#include <boost/asio/ip/address_v6.hpp>
#include "int-util.h"
// IP addresses are kept in network byte order
// Masks below are little endian
// -> convert from network byte order to host byte order before comparing
namespace epee
{
namespace net_utils
{
inline
bool is_ipv6_local(const std::string& ip)
{
auto addr = boost::asio::ip::address_v6::from_string(ip);
// ipv6 link-local unicast addresses are fe80::/10
bool is_link_local = addr.is_link_local();
auto addr_bytes = addr.to_bytes();
// ipv6 unique local unicast addresses start with fc00::/7 -- (fcXX or fdXX)
bool is_unique_local_unicast = (addr_bytes[0] == 0xfc || addr_bytes[0] == 0xfd);
return is_link_local || is_unique_local_unicast;
}
inline
bool is_ipv6_loopback(const std::string& ip)
{
// ipv6 loopback is ::1
return boost::asio::ip::address_v6::from_string(ip).is_loopback();
}
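// Illustrative sketch of the two helpers above (expected results only):
//   is_ipv6_local("fe80::1")     -> true  (fe80::/10 link-local)
//   is_ipv6_local("fd12::1")     -> true  (fc00::/7 unique local)
//   is_ipv6_local("2001:db8::1") -> false (global unicast)
//   is_ipv6_loopback("::1")      -> true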
inline
bool is_ip_local(uint32_t ip)
{
ip = SWAP32LE(ip);
/*
local ip area
10.0.0.0 - 10.255.255.255
@@ -91,7 +57,6 @@ namespace epee
inline
bool is_ip_loopback(uint32_t ip)
{
ip = SWAP32LE(ip);
if( (ip | 0xffffff00) == 0xffffff7f)
return true;
//MAKE_IP

View File

@@ -31,7 +31,6 @@
//#include <Winsock2.h>
//#include <Ws2tcpip.h>
#include <atomic>
#include <string>
#include <boost/version.hpp>
#include <boost/asio/io_service.hpp>
@@ -108,12 +107,11 @@ namespace net_utils
m_ssl_options(epee::net_utils::ssl_support_t::e_ssl_support_autodetect),
m_initialized(true),
m_connected(false),
m_deadline(m_io_service, std::chrono::steady_clock::time_point::max()),
m_deadline(m_io_service),
m_shutdowned(0),
m_bytes_sent(0),
m_bytes_received(0)
{
check_deadline();
}
/*! The first/second parameters are host/port respectively. The third
@@ -156,7 +154,7 @@ namespace net_utils
}
inline
try_connect_result_t try_connect(const std::string& addr, const std::string& port, std::chrono::milliseconds timeout)
try_connect_result_t try_connect(const std::string& addr, const std::string& port, std::chrono::milliseconds timeout, epee::net_utils::ssl_support_t ssl_support)
{
m_deadline.expires_from_now(timeout);
boost::unique_future<boost::asio::ip::tcp::socket> connection = m_connector(addr, port, m_deadline);
@@ -176,11 +174,11 @@ namespace net_utils
m_connected = true;
m_deadline.expires_at(std::chrono::steady_clock::time_point::max());
// SSL Options
if (m_ssl_options.support == epee::net_utils::ssl_support_t::e_ssl_support_enabled || m_ssl_options.support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect)
if (ssl_support == epee::net_utils::ssl_support_t::e_ssl_support_enabled || ssl_support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect)
{
if (!m_ssl_options.handshake(*m_ssl_socket, boost::asio::ssl::stream_base::client, addr, timeout))
if (!m_ssl_options.handshake(*m_ssl_socket, boost::asio::ssl::stream_base::client, addr))
{
if (m_ssl_options.support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect)
if (ssl_support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect)
{
boost::system::error_code ignored_ec;
m_ssl_socket->next_layer().shutdown(boost::asio::ip::tcp::socket::shutdown_both, ignored_ec);
@@ -219,7 +217,7 @@ namespace net_utils
// Get a list of endpoints corresponding to the server name.
try_connect_result_t try_connect_result = try_connect(addr, port, timeout);
try_connect_result_t try_connect_result = try_connect(addr, port, timeout, m_ssl_options.support);
if (try_connect_result == CONNECT_FAILURE)
return false;
if (m_ssl_options.support == epee::net_utils::ssl_support_t::e_ssl_support_autodetect)
@@ -228,7 +226,7 @@ namespace net_utils
{
MERROR("SSL handshake failed on an autodetect connection, reconnecting without SSL");
m_ssl_options.support = epee::net_utils::ssl_support_t::e_ssl_support_disabled;
if (try_connect(addr, port, timeout) != CONNECT_SUCCESS)
if (try_connect(addr, port, timeout, m_ssl_options.support) != CONNECT_SUCCESS)
return false;
}
}
@@ -564,7 +562,7 @@ namespace net_utils
{
m_deadline.cancel();
boost::system::error_code ec;
if(m_ssl_options)
if(m_ssl_options.support != ssl_support_t::e_ssl_support_disabled)
shutdown_ssl();
m_ssl_socket->next_layer().cancel(ec);
if(ec)

View File

@@ -94,7 +94,7 @@ namespace net_utils
return true;
}
inline
inline
bool parse_uri(const std::string uri, http::uri_content& content)
{
@@ -128,51 +128,11 @@ namespace net_utils
return true;
}
inline
bool parse_url_ipv6(const std::string url_str, http::url_content& content)
{
STATIC_REGEXP_EXPR_1(rexp_match_uri, "^((.*?)://)?(\\[(.*)\\](:(\\d+))?)(.*)?", boost::regex::icase | boost::regex::normal);
// 12 3 4 5 6 7
content.port = 0;
boost::smatch result;
if(!(boost::regex_search(url_str, result, rexp_match_uri, boost::match_default) && result[0].matched))
{
LOG_PRINT_L1("[PARSE URI] regex not matched for uri: " << rexp_match_uri);
//content.m_path = uri;
return false;
}
if(result[2].matched)
{
content.schema = result[2];
}
if(result[4].matched)
{
content.host = result[4];
}
else // if host not matched, matching should be considered failed
{
return false;
}
if(result[6].matched)
{
content.port = boost::lexical_cast<uint64_t>(result[6]);
}
if(result[7].matched)
{
content.uri = result[7];
return parse_uri(result[7], content.m_uri_content);
}
return true;
}
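// Worked example of the regex above (hypothetical URL): parsing
// "http://[::1]:18081/json_rpc" yields schema == "http", host == "::1",
// port == 18081 and uri == "/json_rpc".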
inline
inline
bool parse_url(const std::string url_str, http::url_content& content)
{
if (parse_url_ipv6(url_str, content)) return true;
///iframe_test.html?api_url=http://api.vk.com/api.php&api_id=3289090&api_settings=1&viewer_id=562964060&viewer_type=0&sid=0aad8d1c5713130f9ca0076f2b7b47e532877424961367d81e7fa92455f069be7e21bc3193cbd0be11895&secret=368ebbc0ef&access_token=668bc03f43981d883f73876ffff4aa8564254b359cc745dfa1b3cde7bdab2e94105d8f6d8250717569c0a7&user_id=0&group_id=0&is_app_user=1&auth_key=d2f7a895ca5ff3fdb2a2a8ae23fe679a&language=0&parent_language=0&ad_info=ElsdCQBaQlxiAQRdFUVUXiN2AVBzBx5pU1BXIgZUJlIEAWcgAUoLQg==&referrer=unknown&lc_name=9834b6a3&hash=
//STATIC_REGEXP_EXPR_1(rexp_match_uri, "^([^?#]*)(\\?([^#]*))?(#(.*))?", boost::regex::icase | boost::regex::normal);
STATIC_REGEXP_EXPR_1(rexp_match_uri, "^((.*?)://)?(([^/:]*)(:(\\d+))?)(.*)?", boost::regex::icase | boost::regex::normal);

View File

@@ -29,7 +29,6 @@
#ifndef _NET_SSL_H
#define _NET_SSL_H
#include <chrono>
#include <stdint.h>
#include <string>
#include <vector>
@@ -129,11 +128,7 @@ namespace net_utils
\return True if the SSL handshake completes with peer verification
settings. */
bool handshake(
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> &socket,
boost::asio::ssl::stream_base::handshake_type type,
const std::string& host = {},
std::chrono::milliseconds timeout = std::chrono::seconds(15)) const;
bool handshake(boost::asio::ssl::stream<boost::asio::ip::tcp::socket> &socket, boost::asio::ssl::stream_base::handshake_type type, const std::string& host = {}) const;
};
// https://security.stackexchange.com/questions/34780/checking-client-hello-for-https-classification

View File

@@ -31,20 +31,17 @@
#include <boost/uuid/uuid.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/address_v6.hpp>
#include <typeinfo>
#include <type_traits>
#include "byte_slice.h"
#include "enums.h"
#include "misc_log_ex.h"
#include "serialization/keyvalue_serialization.h"
#include "int-util.h"
#include "misc_log_ex.h"
#undef MONERO_DEFAULT_LOG_CATEGORY
#define MONERO_DEFAULT_LOG_CATEGORY "net"
#ifndef MAKE_IP
#define MAKE_IP( a1, a2, a3, a4 ) (a1|(a2<<8)|(a3<<16)|(((uint32_t)a4)<<24))
#define MAKE_IP( a1, a2, a3, a4 ) (a1|(a2<<8)|(a3<<16)|(a4<<24))
#endif
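// Worked example of the packing above: MAKE_IP(127, 0, 0, 1) stores the first
// octet in the low byte, giving 0x0100007f (network byte order as loaded on a
// little-endian host). The uint32_t cast on a4 presumably keeps a4 << 24 from
// overflowing a signed int when a4 >= 128.
uint32_t loopback_ip = MAKE_IP(127, 0, 0, 1); // == 0x0100007f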
#if BOOST_VERSION >= 107000
@@ -92,20 +89,7 @@ namespace net_utils
static constexpr bool is_blockable() noexcept { return true; }
BEGIN_KV_SERIALIZE_MAP()
if (is_store)
{
KV_SERIALIZE_VAL_POD_AS_BLOB_N(m_ip, "ip")
uint32_t ip = SWAP32LE(this_ref.m_ip);
epee::serialization::selector<is_store>::serialize(ip, stg, hparent_section, "m_ip");
}
else
{
if (!epee::serialization::selector<is_store>::serialize_t_val_as_blob(this_ref.m_ip, stg, hparent_section, "ip"))
{
KV_SERIALIZE(m_ip)
const_cast<ipv4_network_address&>(this_ref).m_ip = SWAP32LE(this_ref.m_ip);
}
}
KV_SERIALIZE(m_ip)
KV_SERIALIZE(m_port)
END_KV_SERIALIZE_MAP()
};
@@ -123,106 +107,6 @@ namespace net_utils
inline bool operator>=(const ipv4_network_address& lhs, const ipv4_network_address& rhs) noexcept
{ return !lhs.less(rhs); }
class ipv4_network_subnet
{
uint32_t m_ip;
uint8_t m_mask;
public:
constexpr ipv4_network_subnet() noexcept
: ipv4_network_subnet(0, 0)
{}
constexpr ipv4_network_subnet(uint32_t ip, uint8_t mask) noexcept
: m_ip(ip), m_mask(mask) {}
bool equal(const ipv4_network_subnet& other) const noexcept;
bool less(const ipv4_network_subnet& other) const noexcept;
constexpr bool is_same_host(const ipv4_network_subnet& other) const noexcept
{ return subnet() == other.subnet(); }
bool matches(const ipv4_network_address &address) const;
constexpr uint32_t subnet() const noexcept { return m_ip & ~(0xffffffffull << m_mask); }
std::string str() const;
std::string host_str() const;
bool is_loopback() const;
bool is_local() const;
static constexpr address_type get_type_id() noexcept { return address_type::invalid; }
static constexpr zone get_zone() noexcept { return zone::public_; }
static constexpr bool is_blockable() noexcept { return true; }
BEGIN_KV_SERIALIZE_MAP()
KV_SERIALIZE(m_ip)
KV_SERIALIZE(m_mask)
END_KV_SERIALIZE_MAP()
};
inline bool operator==(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return lhs.equal(rhs); }
inline bool operator!=(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return !lhs.equal(rhs); }
inline bool operator<(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return lhs.less(rhs); }
inline bool operator<=(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return !rhs.less(lhs); }
inline bool operator>(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return rhs.less(lhs); }
inline bool operator>=(const ipv4_network_subnet& lhs, const ipv4_network_subnet& rhs) noexcept
{ return !lhs.less(rhs); }
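// Illustrative sketch (assuming the usual (ip, port) constructor of
// ipv4_network_address): a 10.0.0.0/8 subnet keeps only the low byte of the
// network-byte-order address, so matches() accepts any 10.x.x.x peer.
//   ipv4_network_subnet net(MAKE_IP(10, 0, 0, 0), 8);
//   ipv4_network_address peer(MAKE_IP(10, 5, 6, 7), 18080);
//   net.matches(peer) -> true, net.str() -> "10.0.0.0/8"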
class ipv6_network_address
{
protected:
boost::asio::ip::address_v6 m_address;
uint16_t m_port;
public:
ipv6_network_address()
: ipv6_network_address(boost::asio::ip::address_v6::loopback(), 0)
{}
ipv6_network_address(const boost::asio::ip::address_v6& ip, uint16_t port)
: m_address(ip), m_port(port)
{
}
bool equal(const ipv6_network_address& other) const noexcept;
bool less(const ipv6_network_address& other) const noexcept;
bool is_same_host(const ipv6_network_address& other) const noexcept
{ return m_address == other.m_address; }
boost::asio::ip::address_v6 ip() const noexcept { return m_address; }
uint16_t port() const noexcept { return m_port; }
std::string str() const;
std::string host_str() const;
bool is_loopback() const;
bool is_local() const;
static constexpr address_type get_type_id() noexcept { return address_type::ipv6; }
static constexpr zone get_zone() noexcept { return zone::public_; }
static constexpr bool is_blockable() noexcept { return true; }
static const uint8_t ID = 2;
BEGIN_KV_SERIALIZE_MAP()
boost::asio::ip::address_v6::bytes_type bytes = this_ref.m_address.to_bytes();
epee::serialization::selector<is_store>::serialize_t_val_as_blob(bytes, stg, hparent_section, "addr");
const_cast<boost::asio::ip::address_v6&>(this_ref.m_address) = boost::asio::ip::address_v6(bytes);
KV_SERIALIZE(m_port)
END_KV_SERIALIZE_MAP()
};
inline bool operator==(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return lhs.equal(rhs); }
inline bool operator!=(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return !lhs.equal(rhs); }
inline bool operator<(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return lhs.less(rhs); }
inline bool operator<=(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return !rhs.less(lhs); }
inline bool operator>(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return rhs.less(lhs); }
inline bool operator>=(const ipv6_network_address& lhs, const ipv6_network_address& rhs) noexcept
{ return !lhs.less(rhs); }
class network_address
{
struct interface
@@ -330,8 +214,6 @@ namespace net_utils
{
case address_type::ipv4:
return this_ref.template serialize_addr<ipv4_network_address>(is_store_, stg, hparent_section);
case address_type::ipv6:
return this_ref.template serialize_addr<ipv6_network_address>(is_store_, stg, hparent_section);
case address_type::tor:
return this_ref.template serialize_addr<net::tor_address>(is_store_, stg, hparent_section);
case address_type::i2p:
@@ -368,7 +250,7 @@ namespace net_utils
const network_address m_remote_address;
const bool m_is_income;
const time_t m_started;
const bool m_ssl;
const time_t m_ssl;
time_t m_last_recv;
time_t m_last_send;
uint64_t m_recv_cnt;
@@ -439,7 +321,7 @@ namespace net_utils
/************************************************************************/
struct i_service_endpoint
{
virtual bool do_send(byte_slice message)=0;
virtual bool do_send(const void* ptr, size_t cb)=0;
virtual bool close()=0;
virtual bool send_done()=0;
virtual bool call_run_once_service_io()=0;

View File

@@ -40,7 +40,5 @@ namespace rdln
readline_buffer* m_buffer;
bool m_restart;
};
void clear_screen();
}

View File

@@ -26,7 +26,6 @@
#pragma once
#include <type_traits>
#include <boost/utility/value_init.hpp>
#include <boost/foreach.hpp>
#include "misc_log_ex.h"
@@ -46,20 +45,18 @@ public: \
template<class t_storage> \
bool store( t_storage& st, typename t_storage::hsection hparent_section = nullptr) const\
{\
using type = typename std::remove_const<typename std::remove_reference<decltype(*this)>::type>::type; \
auto &self = const_cast<type&>(*this); \
return self.template serialize_map<true>(st, hparent_section); \
return serialize_map<true>(*this, st, hparent_section);\
}\
template<class t_storage> \
bool _load( t_storage& stg, typename t_storage::hsection hparent_section = nullptr)\
{\
return serialize_map<false>(stg, hparent_section);\
return serialize_map<false>(*this, stg, hparent_section);\
}\
template<class t_storage> \
bool load( t_storage& stg, typename t_storage::hsection hparent_section = nullptr)\
{\
try{\
return serialize_map<false>(stg, hparent_section);\
return serialize_map<false>(*this, stg, hparent_section);\
}\
catch(const std::exception& err) \
{ \
@@ -68,22 +65,13 @@ public: \
return false; \
}\
}\
/*template<typename T> T& this_type_resolver() { return *this; }*/ \
/*using this_type = std::result_of<decltype(this_type_resolver)>::type;*/ \
template<bool is_store, class t_storage> \
bool serialize_map(t_storage& stg, typename t_storage::hsection hparent_section) \
{ \
decltype(*this) &this_ref = *this;
template<bool is_store, class this_type, class t_storage> \
static bool serialize_map(this_type& this_ref, t_storage& stg, typename t_storage::hsection hparent_section) \
{
#define KV_SERIALIZE_N(varialble, val_name) \
epee::serialization::selector<is_store>::serialize(this_ref.varialble, stg, hparent_section, val_name);
#define KV_SERIALIZE_PARENT(type) \
do { \
if (!((type*)this)->serialize_map<is_store, t_storage>(stg, hparent_section)) \
return false; \
} while(0);
template<typename T> inline void serialize_default(const T &t, T v) { }
template<typename T> inline void serialize_default(T &t, T v) { t = v; }

View File

@@ -101,7 +101,7 @@ namespace epee
}
template<class t_request, class t_response, class t_transport>
bool invoke_http_json_rpc(const boost::string_ref uri, std::string method_name, const t_request& out_struct, t_response& result_struct, epee::json_rpc::error &error_struct, t_transport& transport, std::chrono::milliseconds timeout = std::chrono::seconds(15), const boost::string_ref http_method = "GET", const std::string& req_id = "0")
bool invoke_http_json_rpc(const boost::string_ref uri, std::string method_name, const t_request& out_struct, t_response& result_struct, t_transport& transport, std::chrono::milliseconds timeout = std::chrono::seconds(15), const boost::string_ref http_method = "GET", const std::string& req_id = "0")
{
epee::json_rpc::request<t_request> req_t = AUTO_VAL_INIT(req_t);
req_t.jsonrpc = "2.0";
@@ -111,12 +111,10 @@ namespace epee
epee::json_rpc::response<t_response, epee::json_rpc::error> resp_t = AUTO_VAL_INIT(resp_t);
if(!epee::net_utils::invoke_http_json(uri, req_t, resp_t, transport, timeout, http_method))
{
error_struct = {};
return false;
}
if(resp_t.error.code || resp_t.error.message.size())
{
error_struct = resp_t.error;
LOG_ERROR("RPC call of \"" << req_t.method << "\" returned error: " << resp_t.error.code << ", message: " << resp_t.error.message);
return false;
}
@@ -124,13 +122,6 @@ namespace epee
return true;
}
template<class t_request, class t_response, class t_transport>
bool invoke_http_json_rpc(const boost::string_ref uri, std::string method_name, const t_request& out_struct, t_response& result_struct, t_transport& transport, std::chrono::milliseconds timeout = std::chrono::seconds(15), const boost::string_ref http_method = "GET", const std::string& req_id = "0")
{
epee::json_rpc::error error_struct;
return invoke_http_json_rpc(uri, method_name, out_struct, result_struct, error_struct, transport, timeout, http_method, req_id);
}
template<class t_command, class t_transport>
bool invoke_http_json_rpc(const boost::string_ref uri, typename t_command::request& out_struct, typename t_command::response& result_struct, t_transport& transport, std::chrono::milliseconds timeout = std::chrono::seconds(15), const boost::string_ref http_method = "GET", const std::string& req_id = "0")
{

View File

@@ -31,9 +31,6 @@
#include <algorithm>
#include <boost/utility/string_ref.hpp>
#undef MONERO_DEFAULT_LOG_CATEGORY
#define MONERO_DEFAULT_LOG_CATEGORY "serialization"
namespace epee
{
namespace misc_utils
@@ -65,26 +62,6 @@ namespace misc_utils
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
};
static const constexpr unsigned char isx[256] =
{
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 10, 11, 12, 13, 14, 15, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 10, 11, 12, 13, 14, 15, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
};
inline bool isspace(char c)
{
return lut[(uint8_t)c] & 8;
@@ -185,42 +162,6 @@ namespace misc_utils
val.push_back('\\');break;
case '/': //Slash character
val.push_back('/');break;
case 'u': //Unicode code point
if (buf_end - it < 4)
{
ASSERT_MES_AND_THROW("Invalid Unicode escape sequence");
}
else
{
uint32_t dst = 0;
for (int i = 0; i < 4; ++i)
{
const unsigned char tmp = isx[(int)*++it];
CHECK_AND_ASSERT_THROW_MES(tmp != 0xff, "Bad Unicode encoding");
dst = dst << 4 | tmp;
}
// encode as UTF-8
if (dst <= 0x7f)
{
val.push_back(dst);
}
else if (dst <= 0x7ff)
{
val.push_back(0xc0 | (dst >> 6));
val.push_back(0x80 | (dst & 0x3f));
}
else if (dst <= 0xffff)
{
val.push_back(0xe0 | (dst >> 12));
val.push_back(0x80 | ((dst >> 6) & 0x3f));
val.push_back(0x80 | (dst & 0x3f));
}
else
{
ASSERT_MES_AND_THROW("Unicode code point is out or range");
}
}
break;
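// Worked example of the branch above: "\u00e9" yields dst == 0xe9, which falls
// in the <= 0x7ff range and is emitted as the two UTF-8 bytes
// 0xc0 | (0xe9 >> 6) == 0xc3 and 0x80 | (0xe9 & 0x3f) == 0xa9.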
default:
val.push_back(*it);
LOG_PRINT_L0("Unknown escape sequence :\"\\" << *it << "\"");

View File

@@ -1,46 +0,0 @@
// Copyright (c) 2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#pragma once
#include "int-util.h"
template<typename T> T convert_swapper(T t) { return t; }
template<> inline uint16_t convert_swapper(uint16_t t) { return SWAP16LE(t); }
template<> inline int16_t convert_swapper(int16_t t) { return SWAP16LE((uint16_t&)t); }
template<> inline uint32_t convert_swapper(uint32_t t) { return SWAP32LE(t); }
template<> inline int32_t convert_swapper(int32_t t) { return SWAP32LE((uint32_t&)t); }
template<> inline uint64_t convert_swapper(uint64_t t) { return SWAP64LE(t); }
template<> inline int64_t convert_swapper(int64_t t) { return SWAP64LE((uint64_t&)t); }
template<> inline double convert_swapper(double t) { union { uint64_t u; double d; } u; u.d = t; u.u = SWAP64LE(u.u); return u.d; }
#if BYTE_ORDER == BIG_ENDIAN
#define CONVERT_POD(x) convert_swapper(x)
#else
#define CONVERT_POD(x) (x)
#endif
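// Illustrative sketch: CONVERT_POD keeps the portable-storage wire format
// little-endian. On a little-endian build it is the identity; on a big-endian
// build the swapper above is applied.
uint32_t host_value = 0x11223344;
uint32_t wire_value = CONVERT_POD(host_value); // 0x11223344 on LE, byte-swapped on BE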

View File

@@ -30,7 +30,6 @@
#include "misc_language.h"
#include "portable_storage_base.h"
#include "portable_storage_bin_utils.h"
#ifdef EPEE_PORTABLE_STORAGE_RECURSION_LIMIT
#define EPEE_PORTABLE_STORAGE_RECURSION_LIMIT_INTERNAL EPEE_PORTABLE_STORAGE_RECURSION_LIMIT
@@ -118,7 +117,6 @@ namespace epee
RECURSION_LIMITATION();
static_assert(std::is_pod<t_pod_type>::value, "POD type expected");
read(&pod_val, sizeof(pod_val));
pod_val = CONVERT_POD(pod_val);
}
template<class t_type>
@@ -142,7 +140,7 @@ namespace epee
sa.reserve(size);
//TODO: add some optimization here later
while(size--)
sa.m_array.push_back(read<type_name>());
sa.m_array.push_back(read<type_name>());
return storage_entry(array_entry(sa));
}

View File

@@ -31,7 +31,6 @@
#include "pragma_comp_defs.h"
#include "misc_language.h"
#include "portable_storage_base.h"
#include "portable_storage_bin_utils.h"
namespace epee
{
@@ -41,9 +40,8 @@ namespace epee
template<class pack_value, class t_stream>
size_t pack_varint_t(t_stream& strm, uint8_t type_or, size_t& pv)
{
pack_value v = pv << 2;
pack_value v = (*((pack_value*)&pv)) << 2;
v |= type_or;
v = CONVERT_POD(v);
strm.write((const char*)&v, sizeof(pack_value));
return sizeof(pack_value);
}
@@ -95,11 +93,8 @@ namespace epee
uint8_t type = contained_type|SERIALIZE_FLAG_ARRAY;
m_strm.write((const char*)&type, 1);
pack_varint(m_strm, arr_pod.m_array.size());
for(t_pod_type x: arr_pod.m_array)
{
x = CONVERT_POD(x);
for(const t_pod_type& x: arr_pod.m_array)
m_strm.write((const char*)&x, sizeof(t_pod_type));
}
return true;
}
@@ -152,8 +147,7 @@ namespace epee
bool pack_pod_type(uint8_t type, const pod_type& v)
{
m_strm.write((const char*)&type, 1);
pod_type v0 = CONVERT_POD(v);
m_strm.write((const char*)&v0, sizeof(pod_type));
m_strm.write((const char*)&v, sizeof(pod_type));
return true;
}
//section, array_entry

View File

@@ -59,6 +59,26 @@
#pragma comment (lib, "Rpcrt4.lib")
#endif
static const constexpr unsigned char isx[256] =
{
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 10, 11, 12, 13, 14, 15, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 10, 11, 12, 13, 14, 15, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
};
namespace epee
{
namespace string_tools
@@ -79,10 +99,10 @@ namespace string_tools
for(size_t i = 0; i < s.size(); i += 2)
{
int tmp = *src++;
tmp = epee::misc_utils::parse::isx[tmp];
tmp = isx[tmp];
if (tmp == 0xff) return false;
int t2 = *src++;
t2 = epee::misc_utils::parse::isx[t2];
t2 = isx[t2];
if (t2 == 0xff) return false;
*dst++ = (tmp << 4) | t2;
}

View File

@@ -150,6 +150,81 @@ namespace epee
};
#if defined(WINDWOS_PLATFORM)
class shared_critical_section
{
public:
shared_critical_section()
{
::InitializeSRWLock(&m_srw_lock);
}
~shared_critical_section()
{}
bool lock_shared()
{
AcquireSRWLockShared(&m_srw_lock);
return true;
}
bool unlock_shared()
{
ReleaseSRWLockShared(&m_srw_lock);
return true;
}
bool lock_exclusive()
{
::AcquireSRWLockExclusive(&m_srw_lock);
return true;
}
bool unlock_exclusive()
{
::ReleaseSRWLockExclusive(&m_srw_lock);
return true;
}
private:
SRWLOCK m_srw_lock;
};
class shared_guard
{
public:
shared_guard(shared_critical_section& ref_sec):m_ref_sec(ref_sec)
{
m_ref_sec.lock_shared();
}
~shared_guard()
{
m_ref_sec.unlock_shared();
}
private:
shared_critical_section& m_ref_sec;
};
class exclusive_guard
{
public:
exclusive_guard(shared_critical_section& ref_sec):m_ref_sec(ref_sec)
{
m_ref_sec.lock_exclusive();
}
~exclusive_guard()
{
m_ref_sec.unlock_exclusive();
}
private:
shared_critical_section& m_ref_sec;
};
#endif
#define SHARED_CRITICAL_REGION_BEGIN(x) { shared_guard critical_region_var(x)
#define EXCLUSIVE_CRITICAL_REGION_BEGIN(x) { exclusive_guard critical_region_var(x)
#define CRITICAL_REGION_LOCAL(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var(x)
#define CRITICAL_REGION_BEGIN(x) { boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep())); epee::critical_region_t<decltype(x)> critical_region_var(x)
#define CRITICAL_REGION_LOCAL1(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var1(x)
@@ -157,6 +232,22 @@ namespace epee
#define CRITICAL_REGION_END() }
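// Illustrative sketch (hypothetical m_lock member, assuming epee::critical_section):
// the macros above expand to scope guards, so the matching unlock cannot be
// missed on early returns.
//   epee::critical_section m_lock;
//   void do_work()
//   {
//     CRITICAL_REGION_LOCAL(m_lock); // held until the end of the scope
//     // ... touch shared state ...
//   }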
#if defined(WINDWOS_PLATFORM)
inline const char* get_wait_for_result_as_text(DWORD res)
{
switch(res)
{
case WAIT_ABANDONED: return "WAIT_ABANDONED";
case WAIT_TIMEOUT: return "WAIT_TIMEOUT";
case WAIT_OBJECT_0: return "WAIT_OBJECT_0";
case WAIT_OBJECT_0+1: return "WAIT_OBJECT_1";
case WAIT_OBJECT_0+2: return "WAIT_OBJECT_2";
default: return "UNKNOWN CODE";
}
}
#endif
}
#endif

View File

@@ -26,11 +26,10 @@
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
add_library(epee STATIC byte_slice.cpp hex.cpp http_auth.cpp mlog.cpp net_helper.cpp net_utils_base.cpp string_tools.cpp wipeable_string.cpp
levin_base.cpp memwipe.c connection_basic.cpp network_throttle.cpp network_throttle-detail.cpp mlocker.cpp buffer.cpp net_ssl.cpp
int-util.cpp)
add_library(epee STATIC hex.cpp http_auth.cpp mlog.cpp net_helper.cpp net_utils_base.cpp string_tools.cpp wipeable_string.cpp memwipe.c
connection_basic.cpp network_throttle.cpp network_throttle-detail.cpp mlocker.cpp buffer.cpp net_ssl.cpp)
if (USE_READLINE AND (GNU_READLINE_FOUND OR (DEPENDS AND NOT MINGW)))
if (USE_READLINE AND GNU_READLINE_FOUND)
add_library(epee_readline STATIC readline_buffer.cpp)
endif()
@@ -63,7 +62,7 @@ target_link_libraries(epee
${OPENSSL_LIBRARIES}
${EXTRA_LIBRARIES})
if (USE_READLINE AND (GNU_READLINE_FOUND OR (DEPENDS AND NOT MINGW)))
if (USE_READLINE AND GNU_READLINE_FOUND)
target_link_libraries(epee_readline
PUBLIC
easylogging

View File

@@ -1,209 +0,0 @@
// Copyright (c) 2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <atomic>
#include <cstdlib>
#include <cstring>
#include <limits>
#include <stdexcept>
#include <utility>
#include "byte_slice.h"
namespace epee
{
struct byte_slice_data
{
byte_slice_data() noexcept
: ref_count(1)
{}
virtual ~byte_slice_data() noexcept
{}
std::atomic<std::size_t> ref_count;
};
void release_byte_slice::operator()(byte_slice_data* ptr) const noexcept
{
if (ptr && --(ptr->ref_count) == 0)
{
ptr->~byte_slice_data();
free(ptr);
}
}
namespace
{
template<typename T>
struct adapted_byte_slice final : byte_slice_data
{
explicit adapted_byte_slice(T&& buffer)
: byte_slice_data(), buffer(std::move(buffer))
{}
virtual ~adapted_byte_slice() noexcept final override
{}
const T buffer;
};
// bytes "follow" this structure in memory slab
struct raw_byte_slice final : byte_slice_data
{
raw_byte_slice() noexcept
: byte_slice_data()
{}
virtual ~raw_byte_slice() noexcept final override
{}
};
/* This technique is not-standard, but allows for the reference count and
memory for the bytes (when given a list of spans) to be allocated in a
single call. In that situation, the dynamic sized bytes are after/behind
the raw_byte_slice class. The C runtime has to track the number of bytes
allocated regardless, so free'ing is relatively easy. */
template<typename T, typename... U>
std::unique_ptr<T, release_byte_slice> allocate_slice(std::size_t extra_bytes, U&&... args)
{
if (std::numeric_limits<std::size_t>::max() - sizeof(T) < extra_bytes)
throw std::bad_alloc{};
void* const ptr = malloc(sizeof(T) + extra_bytes);
if (ptr == nullptr)
throw std::bad_alloc{};
try
{
new (ptr) T{std::forward<U>(args)...};
}
catch (...)
{
free(ptr);
throw;
}
return std::unique_ptr<T, release_byte_slice>{reinterpret_cast<T*>(ptr)};
}
} // anonymous
byte_slice::byte_slice(byte_slice_data* storage, span<const std::uint8_t> portion) noexcept
: storage_(storage), portion_(portion)
{
if (storage_)
++(storage_->ref_count);
}
template<typename T>
byte_slice::byte_slice(const adapt_buffer, T&& buffer)
: storage_(nullptr), portion_(to_byte_span(to_span(buffer)))
{
if (!buffer.empty())
storage_ = allocate_slice<adapted_byte_slice<T>>(0, std::move(buffer));
}
byte_slice::byte_slice(std::initializer_list<span<const std::uint8_t>> sources)
: byte_slice()
{
std::size_t space_needed = 0;
for (const auto source : sources)
space_needed += source.size();
if (space_needed)
{
auto storage = allocate_slice<raw_byte_slice>(space_needed);
span<std::uint8_t> out{reinterpret_cast<std::uint8_t*>(storage.get() + 1), space_needed};
portion_ = {out.data(), out.size()};
for (const auto source : sources)
{
std::memcpy(out.data(), source.data(), source.size());
if (out.remove_prefix(source.size()) < source.size())
throw std::bad_alloc{}; // size_t overflow on space_needed
}
storage_ = std::move(storage);
}
}
byte_slice::byte_slice(std::string&& buffer)
: byte_slice(adapt_buffer{}, std::move(buffer))
{}
byte_slice::byte_slice(std::vector<std::uint8_t>&& buffer)
: byte_slice(adapt_buffer{}, std::move(buffer))
{}
byte_slice::byte_slice(byte_slice&& source) noexcept
: storage_(std::move(source.storage_)), portion_(source.portion_)
{
source.portion_ = epee::span<const std::uint8_t>{};
}
byte_slice& byte_slice::operator=(byte_slice&& source) noexcept
{
storage_ = std::move(source.storage_);
portion_ = source.portion_;
if (source.storage_ == nullptr)
source.portion_ = epee::span<const std::uint8_t>{};
return *this;
}
std::size_t byte_slice::remove_prefix(std::size_t max_bytes) noexcept
{
max_bytes = portion_.remove_prefix(max_bytes);
if (portion_.empty())
storage_ = nullptr;
return max_bytes;
}
byte_slice byte_slice::take_slice(const std::size_t max_bytes) noexcept
{
byte_slice out{};
std::uint8_t const* const ptr = data();
out.portion_ = {ptr, portion_.remove_prefix(max_bytes)};
if (portion_.empty())
out.storage_ = std::move(storage_); // no atomic inc/dec
else
out = {storage_.get(), out.portion_};
return out;
}
byte_slice byte_slice::get_slice(const std::size_t begin, const std::size_t end) const
{
if (end < begin || portion_.size() < end)
throw std::out_of_range{"bad slice range"};
if (begin == end)
return {};
return {storage_.get(), {portion_.begin() + begin, end - begin}};
}
} // epee
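// Hypothetical usage sketch of the class above: adopt a std::string buffer,
// then split off a prefix without copying.
//   epee::byte_slice full{std::string{"hello world"}};
//   epee::byte_slice head = full.take_slice(6); // views "hello ", same refcounted slab
//   // full now views "world"; the slab is freed once both slices are destroyed.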

View File

@@ -128,7 +128,7 @@ connection_basic_pimpl::connection_basic_pimpl(const std::string &name) : m_thro
int connection_basic_pimpl::m_default_tos;
// methods:
connection_basic::connection_basic(boost::asio::ip::tcp::socket&& sock, std::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support)
connection_basic::connection_basic(boost::asio::ip::tcp::socket&& sock, boost::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support)
:
m_state(std::move(state)),
mI( new connection_basic_pimpl("peer") ),
@@ -136,7 +136,6 @@ connection_basic::connection_basic(boost::asio::ip::tcp::socket&& sock, std::sha
socket_(GET_IO_SERVICE(sock), get_context(m_state.get())),
m_want_close_connection(false),
m_was_shutdown(false),
m_is_multithreaded(false),
m_ssl_support(ssl_support)
{
// add nullptr checks if removed
@@ -153,7 +152,7 @@ connection_basic::connection_basic(boost::asio::ip::tcp::socket&& sock, std::sha
_note("Spawned connection #"<<mI->m_peer_number<<" to " << remote_addr_str << " currently we have sockets count:" << m_state->sock_count);
}
connection_basic::connection_basic(boost::asio::io_service &io_service, std::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support)
connection_basic::connection_basic(boost::asio::io_service &io_service, boost::shared_ptr<connection_basic_shared_state> state, ssl_support_t ssl_support)
:
m_state(std::move(state)),
mI( new connection_basic_pimpl("peer") ),
@@ -161,7 +160,6 @@ connection_basic::connection_basic(boost::asio::io_service &io_service, std::sha
socket_(io_service, get_context(m_state.get())),
m_want_close_connection(false),
m_was_shutdown(false),
m_is_multithreaded(false),
m_ssl_support(ssl_support)
{
// add nullptr checks if removed
@@ -286,6 +284,9 @@ double connection_basic::get_sleep_time(size_t cb) {
return t;
}
void connection_basic::set_save_graph(bool save_graph) {
}
} // namespace
} // namespace

View File

@@ -1,48 +0,0 @@
// Copyright (c) 2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <boost/multiprecision/cpp_int.hpp>
void div128_64(uint64_t dividend_hi, uint64_t dividend_lo, uint64_t divisor, uint64_t* quotient_hi, uint64_t *quotient_lo, uint64_t *remainder_hi, uint64_t *remainder_lo)
{
typedef boost::multiprecision::uint128_t uint128_t;
uint128_t dividend = dividend_hi;
dividend <<= 64;
dividend |= dividend_lo;
uint128_t q, r;
divide_qr(dividend, uint128_t(divisor), q, r);
*quotient_hi = ((q >> 64) & 0xffffffffffffffffull).convert_to<uint64_t>();
*quotient_lo = (q & 0xffffffffffffffffull).convert_to<uint64_t>();
if (remainder_hi)
*remainder_hi = ((r >> 64) & 0xffffffffffffffffull).convert_to<uint64_t>();
if (remainder_lo)
*remainder_lo = (r & 0xffffffffffffffffull).convert_to<uint64_t>();
}
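// Worked example (hypothetical values): dividing 2^64 (hi = 1, lo = 0) by 3
// gives quotient hi 0, lo 0x5555555555555555 and remainder 1.
//   uint64_t qh, ql, rh, rl;
//   div128_64(1, 0, 3, &qh, &ql, &rh, &rl);
//   // qh == 0, ql == 0x5555555555555555, rh == 0, rl == 1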

View File

@@ -1,128 +0,0 @@
// Copyright (c) 2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "net/levin_base.h"
#include "int-util.h"
namespace epee
{
namespace levin
{
bucket_head2 make_header(uint32_t command, uint64_t msg_size, uint32_t flags, bool expect_response) noexcept
{
bucket_head2 head = {0};
head.m_signature = SWAP64LE(LEVIN_SIGNATURE);
head.m_have_to_return_data = expect_response;
head.m_cb = SWAP64LE(msg_size);
head.m_command = SWAP32LE(command);
head.m_protocol_version = SWAP32LE(LEVIN_PROTOCOL_VER_1);
head.m_flags = SWAP32LE(flags);
return head;
}
byte_slice make_notify(int command, epee::span<const std::uint8_t> payload)
{
const bucket_head2 head = make_header(command, payload.size(), LEVIN_PACKET_REQUEST, false);
return byte_slice{epee::as_byte_span(head), payload};
}
byte_slice make_noise_notify(const std::size_t noise_bytes)
{
static constexpr const std::uint32_t flags =
LEVIN_PACKET_BEGIN | LEVIN_PACKET_END;
if (noise_bytes < sizeof(bucket_head2))
return nullptr;
std::string buffer(noise_bytes, char(0));
const bucket_head2 head = make_header(0, noise_bytes - sizeof(bucket_head2), flags, false);
std::memcpy(std::addressof(buffer[0]), std::addressof(head), sizeof(head));
return byte_slice{std::move(buffer)};
}
byte_slice make_fragmented_notify(const byte_slice& noise_message, int command, epee::span<const std::uint8_t> payload)
{
const size_t noise_size = noise_message.size();
if (noise_size < sizeof(bucket_head2) * 2)
return nullptr;
if (payload.size() <= noise_size - sizeof(bucket_head2))
{
/* The entire message can be sent at once, and the levin binary parser
will ignore extra bytes. So just pad with zeroes and otherwise send
a "normal", not fragmented message. */
const size_t padding = noise_size - sizeof(bucket_head2) - payload.size();
const span<const uint8_t> padding_bytes{noise_message.end() - padding, padding};
const bucket_head2 head = make_header(command, noise_size - sizeof(bucket_head2), LEVIN_PACKET_REQUEST, false);
return byte_slice{as_byte_span(head), payload, padding_bytes};
}
// fragment message
const size_t payload_space = noise_size - sizeof(bucket_head2);
const size_t expected_fragments = ((payload.size() - 2) / payload_space) + 1;
std::string buffer{};
buffer.reserve((expected_fragments + 1) * noise_size); // +1 here overselects for internal bucket_head2 value
bucket_head2 head = make_header(0, noise_size - sizeof(bucket_head2), LEVIN_PACKET_BEGIN, false);
buffer.append(reinterpret_cast<const char*>(&head), sizeof(head));
head.m_command = command;
head.m_flags = LEVIN_PACKET_REQUEST;
head.m_cb = payload.size();
buffer.append(reinterpret_cast<const char*>(&head), sizeof(head));
size_t copy_size = payload.remove_prefix(payload_space - sizeof(bucket_head2));
buffer.append(reinterpret_cast<const char*>(payload.data()) - copy_size, copy_size);
head.m_command = 0;
head.m_flags = 0;
head.m_cb = noise_size - sizeof(bucket_head2);
while (!payload.empty())
{
copy_size = payload.remove_prefix(payload_space);
if (payload.empty())
head.m_flags = LEVIN_PACKET_END;
buffer.append(reinterpret_cast<const char*>(&head), sizeof(head));
buffer.append(reinterpret_cast<const char*>(payload.data()) - copy_size, copy_size);
}
const size_t padding = noise_size - copy_size - sizeof(bucket_head2);
buffer.append(reinterpret_cast<const char*>(noise_message.end()) - padding, padding);
return byte_slice{std::move(buffer)};
}
} // levin
} // epee
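// Hypothetical usage sketch (COMMAND_ID and payload are placeholders): create
// a fixed-size noise bucket, then hide a real notify inside equally sized
// fragments so all traffic looks the same on the wire.
//   const std::size_t noise_size = 3072; // assumed, must exceed two headers
//   epee::byte_slice noise = epee::levin::make_noise_notify(noise_size);
//   epee::byte_slice real  = epee::levin::make_fragmented_notify(noise, COMMAND_ID, payload);
//   // Either slice can be handed to the byte_slice send() path shown earlier.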

View File

@@ -100,7 +100,7 @@ static const char *get_default_categories(int level)
switch (level)
{
case 0:
categories = "*:WARNING,net:FATAL,net.http:FATAL,net.ssl:FATAL,net.p2p:FATAL,net.cn:FATAL,global:INFO,verify:FATAL,serialization:FATAL,daemon.rpc.payment:ERROR,stacktrace:INFO,logging:INFO,msgwriter:INFO";
categories = "*:WARNING,net:FATAL,net.http:FATAL,net.ssl:FATAL,net.p2p:FATAL,net.cn:FATAL,global:INFO,verify:FATAL,serialization:FATAL,stacktrace:INFO,logging:INFO,msgwriter:INFO";
break;
case 1:
categories = "*:INFO,global:INFO,stacktrace:INFO,logging:INFO,msgwriter:INFO,perf.*:DEBUG";
@@ -109,7 +109,7 @@ static const char *get_default_categories(int level)
categories = "*:DEBUG";
break;
case 3:
categories = "*:TRACE,*.dump:DEBUG";
categories = "*:TRACE";
break;
case 4:
categories = "*:TRACE";
@@ -472,54 +472,4 @@ void reset_console_color() {
}
static bool mlog(el::Level level, const char *category, const char *format, va_list ap) noexcept
{
int size = 0;
char *p = NULL;
va_list apc;
bool ret = true;
/* Determine required size */
va_copy(apc, ap);
size = vsnprintf(p, size, format, apc);
va_end(apc);
if (size < 0)
return false;
size++; /* For '\0' */
p = (char*)malloc(size);
if (p == NULL)
return false;
size = vsnprintf(p, size, format, ap);
if (size < 0)
{
free(p);
return false;
}
try
{
MCLOG(level, category, el::Color::Default, p);
}
catch(...)
{
ret = false;
}
free(p);
return ret;
}
#define DEFLOG(fun,lev) \
bool m##fun(const char *category, const char *fmt, ...) { va_list ap; va_start(ap, fmt); bool ret = mlog(el::Level::lev, category, fmt, ap); va_end(ap); return ret; }
DEFLOG(error, Error)
DEFLOG(warning, Warning)
DEFLOG(info, Info)
DEFLOG(debug, Debug)
DEFLOG(trace, Trace)
#undef DEFLOG
#endif //_MLOG_H_

View File

@@ -11,39 +11,10 @@ namespace net_utils
//////////////////////////////////////////////////////////////////////////
boost::asio::ip::tcp::resolver resolver(GET_IO_SERVICE(timeout));
boost::asio::ip::tcp::resolver::query query(boost::asio::ip::tcp::v4(), addr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
bool try_ipv6 = false;
boost::asio::ip::tcp::resolver::iterator iterator;
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
boost::asio::ip::tcp::resolver::iterator end;
boost::system::error_code resolve_error;
try
{
iterator = resolver.resolve(query, resolve_error);
if(iterator == end) // Documentation states that successful call is guaranteed to be non-empty
{
// if IPv4 resolution fails, try IPv6. Unintentional outgoing IPv6 connections should only
// be possible if for some reason a hostname was given and that hostname fails IPv4 resolution,
// so at least for now there should not be a need for a flag "using ipv6 is ok"
try_ipv6 = true;
}
}
catch (const boost::system::system_error& e)
{
if (resolve_error != boost::asio::error::host_not_found &&
resolve_error != boost::asio::error::host_not_found_try_again)
{
throw;
}
try_ipv6 = true;
}
if (try_ipv6)
{
boost::asio::ip::tcp::resolver::query query6(boost::asio::ip::tcp::v6(), addr, port, boost::asio::ip::tcp::resolver::query::canonical_name);
iterator = resolver.resolve(query6);
if (iterator == end)
throw boost::system::system_error{boost::asio::error::fault, "Failed to resolve " + addr};
}
if(iterator == end) // Documentation states that successful call is guaranteed to be non-empty
throw boost::system::system_error{boost::asio::error::fault, "Failed to resolve " + addr};
//////////////////////////////////////////////////////////////////////////

View File

@@ -27,13 +27,10 @@
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <string.h>
#include <thread>
#include <boost/asio/ssl.hpp>
#include <boost/lambda/lambda.hpp>
#include <openssl/ssl.h>
#include <openssl/pem.h>
#include "misc_log_ex.h"
#include "net/net_helper.h"
#include "net/net_ssl.h"
#undef MONERO_DEFAULT_LOG_CATEGORY
@@ -459,11 +456,7 @@ bool ssl_options_t::has_fingerprint(boost::asio::ssl::verify_context &ctx) const
return false;
}
bool ssl_options_t::handshake(
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> &socket,
boost::asio::ssl::stream_base::handshake_type type,
const std::string& host,
std::chrono::milliseconds timeout) const
bool ssl_options_t::handshake(boost::asio::ssl::stream<boost::asio::ip::tcp::socket> &socket, boost::asio::ssl::stream_base::handshake_type type, const std::string& host) const
{
socket.next_layer().set_option(boost::asio::ip::tcp::no_delay(true));
@@ -509,30 +502,8 @@ bool ssl_options_t::handshake(
});
}
auto& io_service = GET_IO_SERVICE(socket);
boost::asio::steady_timer deadline(io_service, timeout);
deadline.async_wait([&socket](const boost::system::error_code& error) {
if (error != boost::asio::error::operation_aborted)
{
socket.next_layer().close();
}
});
boost::system::error_code ec = boost::asio::error::would_block;
socket.async_handshake(type, boost::lambda::var(ec) = boost::lambda::_1);
if (io_service.stopped())
{
io_service.reset();
}
while (ec == boost::asio::error::would_block && !io_service.stopped())
{
// should poll_one(), can't run_one() because it can block if there is
// another worker thread executing io_service's tasks
// TODO: once we get Boost 1.66+, replace with run_one_for/run_until
std::this_thread::sleep_for(std::chrono::milliseconds(30));
io_service.poll_one();
}
boost::system::error_code ec;
socket.handshake(type, ec);
if (ec)
{
MERROR("SSL handshake failed, connection dropped: " << ec.message());

View File

@@ -21,37 +21,6 @@ namespace epee { namespace net_utils
bool ipv4_network_address::is_loopback() const { return net_utils::is_ip_loopback(ip()); }
bool ipv4_network_address::is_local() const { return net_utils::is_ip_local(ip()); }
bool ipv6_network_address::equal(const ipv6_network_address& other) const noexcept
{ return is_same_host(other) && port() == other.port(); }
bool ipv6_network_address::less(const ipv6_network_address& other) const noexcept
{ return is_same_host(other) ? port() < other.port() : m_address < other.m_address; }
std::string ipv6_network_address::str() const
{ return std::string("[") + host_str() + "]:" + std::to_string(port()); }
std::string ipv6_network_address::host_str() const { return m_address.to_string(); }
bool ipv6_network_address::is_loopback() const { return m_address.is_loopback(); }
bool ipv6_network_address::is_local() const { return m_address.is_link_local(); }
bool ipv4_network_subnet::equal(const ipv4_network_subnet& other) const noexcept
{ return is_same_host(other) && m_mask == other.m_mask; }
bool ipv4_network_subnet::less(const ipv4_network_subnet& other) const noexcept
{ return subnet() < other.subnet() ? true : (other.subnet() < subnet() ? false : (m_mask < other.m_mask)); }
std::string ipv4_network_subnet::str() const
{ return string_tools::get_ip_string_from_int32(subnet()) + "/" + std::to_string(m_mask); }
std::string ipv4_network_subnet::host_str() const { return string_tools::get_ip_string_from_int32(subnet()) + "/" + std::to_string(m_mask); }
bool ipv4_network_subnet::is_loopback() const { return net_utils::is_ip_loopback(subnet()); }
bool ipv4_network_subnet::is_local() const { return net_utils::is_ip_local(subnet()); }
bool ipv4_network_subnet::matches(const ipv4_network_address &address) const
{
return (address.ip() & ~(0xffffffffull << m_mask)) == subnet();
}
bool network_address::equal(const network_address& other) const
{

View File

@@ -71,11 +71,6 @@ rdln::linestatus rdln::readline_buffer::get_line(std::string& line) const
{
boost::lock_guard<boost::mutex> lock(sync_mutex);
line_stat = rdln::partial;
if (!m_cout_buf)
{
line = "";
return rdln::full;
}
rl_callback_read_char();
if (line_stat == rdln::full)
{
@@ -229,8 +224,3 @@ static void remove_line_handler()
rl_callback_handler_remove();
}
void rdln::clear_screen()
{
rl_clear_screen(0, 0);
}


@@ -26,16 +26,12 @@ Preparing the Gitian builder host
The first step is to prepare the host environment that will be used to perform the Gitian builds.
This guide explains how to set up the environment, and how to start the builds.
* Gitian host OS should be Ubuntu 18.04 "Bionic Beaver". If you are on macOS or Windows, for example, you can run it in a VM, but builds will be slower.
* Gitian gives you the option of using any of three different virtualization tools: `kvm`, `docker` or `lxc`. This documentation will only show how to build with `lxc` and `docker` (documentation for `kvm` is welcome). Building with `lxc` is the default, but is more complicated, so we recommend docker for your first time.
## Create the gitianuser account
You need to create a new user called `gitianuser` and be logged in as that user. The user needs `sudo` access.
Gitian offers to build with either `kvm`, `docker` or `lxc`. The default build
path chosen is `lxc`, but its setup is more complicated. You need to be logged in as the `gitianuser`.
If this user does not exist yet on your system, create it. Gitian can use
either kvm, lxc or docker as a host environment. This documentation will show
how to build with lxc and docker. While the docker setup is easy, the lxc setup
is more involved.
LXC
---
@@ -80,34 +76,18 @@ This setup is required to enable networking in the container.
Docker
------
Prepare for building with docker:
Building in docker does not require much setup. Install docker on your host, then type the following:
```bash
sudo apt-get install git make curl docker.io
```
Consider adding `gitianuser` to the `docker` group after reading about [the security implications](https://docs.docker.com/v17.09/engine/installation/linux/linux-postinstall/):
```bash
sudo groupadd docker
sudo apt-get install git make curl
sudo usermod -aG docker gitianuser
```
Optionally add yourself to the docker group. Note that this will give docker root access to your system.
```bash
sudo usermod -aG docker gitianuser
```
Manual Building
-------------------
The instructions below use the automated script [gitian-build.py](gitian-build.py) which only works in Ubuntu.
The script automatically installs some packages with apt. If you are not running it on a debian-like system, pass `--no-apt` along with the other
arguments to it. It calls all available .yml descriptors, which in turn pass the build configurations for different platforms to gitian.
The instructions below use the automated script [gitian-build.py](gitian-build.py) which is tested to work on Ubuntu.
It calls all available .yml descriptors, which in turn pass the build configurations for different platforms to gitian.
Help for the build steps taken can be accessed with `./gitian-build.py --help`.
@@ -120,95 +100,66 @@ The `gitian-build.py` script will checkout different release tags, so it's best
cp monero/contrib/gitian/gitian-build.py .
```
### Setup the required environment
Setup for LXC:
Set up the required environment; you only need to do this once:
```bash
GH_USER=fluffypony
VERSION=v0.15.0.1
./gitian-build.py --setup $GH_USER $VERSION
./gitian-build.py --setup fluffypony v0.14.0
```
Where `GH_USER` is your GitHub user name and `VERSION` is the version tag you want to build.
Setup for docker:
Where `fluffypony` is your GitHub user name and `v0.14.0` is the version tag you want to build.
If you are using docker, run it with:
```bash
./gitian-build.py --setup --docker $GH_USER $VERSION
./gitian-build.py --setup --docker fluffypony v0.14.0
```
While gitian and this build script do provide a way for you to sign the build directly, it is recommended to sign in a separate step. This script is only there for convenience. Separate steps for building can still be taken.
While gitian and this build script do provide a way for you to sign the build directly, it is recommended to sign in a separate step.
This script is only there for convenience. Separate steps for building can still be taken.
In order to sign gitian builds on your host machine, which has your PGP key,
fork the [gitian.sigs repository](https://github.com/monero-project/gitian.sigs) and clone it on your host machine,
fork the gitian.sigs repository and clone it on your host machine,
or pass the signed assert file back to your build machine.
```bash
git clone git@github.com:monero-project/gitian.sigs.git
git remote add $GH_USER git@github.com:$GH_USER/gitian.sigs.git
git remote add fluffypony git@github.com:fluffypony/gitian.sigs.git
```
Build the binaries
------------------
**Note:** if you intend to build MacOS binaries, please follow [these instructions](https://github.com/bitcoin-core/docs/blob/master/gitian-building/gitian-building-mac-os-sdk.md) to get the required SDK.
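As a rough sketch of where the SDK archive needs to end up (the path below follows the `MacOSX10.11.sdk.tar.gz` check in `gitian-build.py`; older layouts use `gitian-builder/inputs/` instead):
```bash
# Copy the SDK tarball produced by the linked guide into the builder's inputs directory
# so the MacOS build is not skipped. Path and SDK version follow the script's check.
cp MacOSX10.11.sdk.tar.gz builder/inputs/
```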
To build the most recent tag (pass in `--docker` if using docker):
Build Binaries
-----------------------------
To build the most recent tag (pass in `--docker` after setting up with docker):
```bash
./gitian-build.py --detach-sign --no-commit --build $GH_USER $VERSION
./gitian-build.py --detach-sign --no-commit -b fluffypony v0.14.0
```
To speed up the build, use `-j 5 --memory 5000` as the first arguments, where `5` is the number of CPUs you allocated to the VM plus one, and `5000` is a little less than the amount of RAM you allocated, in MB. If there is memory corruption on your machine, try tweaking these values.
To speed up the build, use `-j 5 -m 5000` as the first arguments, where `5` is the number of CPUs you allocated to the VM plus one, and `5000` is a little less than the amount of RAM you allocated, in MB. If there is memory corruption on your machine, try tweaking these values.
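For example, a full (illustrative) invocation with these speed-ups, using a placeholder signer and tag, might look like:
```bash
# Example values only: 5 jobs, ~5 GB of RAM, detached signing, nothing committed to git.
./gitian-build.py -j 5 --memory 5000 --detach-sign --no-commit --build fluffypony v0.15.0.1
```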
If all went well, this produces a number of (uncommitted) `.assert` files in the gitian.sigs directory.
If all went well, this produces a number of (uncommitted) `.assert` files in the gitian.sigs repository.
Checking your work
------------------
Take a look in the assert files and note the SHA256 checksums listed there, e.g. for `v0.15.0.1` you should get this checksum:
```
2b95118f53d98d542a85f8732b84ba13b3cd20517ccb40332b0edd0ddf4f8c62 monero-x86_64-linux-gnu.tar.gz
```
You should verify that this is really the checksum you get for the file you built. You can also look in the gitian.sigs repo and/or the [getmonero.org release checksums](https://web.getmonero.org/downloads/hashes.txt) to see if others got the same checksum for the same version tag. If there is ever a mismatch -- **STOP! Something is wrong**. Contact others on IRC / GitHub to figure out what is going on.
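A minimal sketch of that check (the output and assert paths are illustrative and depend on your signer name, version and build mode):
```bash
# Recompute the hash of the archive you built and compare it against the value in your .assert file.
sha256sum out/v0.15.0.1/monero-x86_64-linux-gnu.tar.gz
grep x86_64-linux-gnu v0.15.0.1-linux/fluffypony/monero-linux-*-build.assert
```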
Signing assert files
--------------------
If you chose to do detached signing using `--detach-sign` above (recommended), you need to copy these uncommitted changes to your host machine, then sign them using your gpg key like so:
If you do detached, offline signing, you need to copy these uncommitted changes to your host machine, where you can sign them. For example:
```bash
GH_USER=fluffypony
VERSION=v0.15.0.1
gpg --detach-sign ${VERSION}-linux/${GH_USER}/monero-linux-*-build.assert
gpg --detach-sign ${VERSION}-win/${GH_USER}/monero-win-*-build.assert
gpg --detach-sign ${VERSION}-osx/${GH_USER}/monero-osx-*-build.assert
export NAME=fluffypony
export VERSION=v0.14.0
gpg --output $VERSION-linux/$NAME/monero-linux-$VERSION-build.assert.sig --detach-sign $VERSION-linux/$NAME/monero-linux-$VERSION-build.assert
gpg --output $VERSION-osx-unsigned/$NAME/monero-osx-$VERSION-build.assert.sig --detach-sign $VERSION-osx-unsigned/$NAME/monero-osx-$VERSION-build.assert
gpg --output $VERSION-win-unsigned/$NAME/monero-win-$VERSION-build.assert.sig --detach-sign $VERSION-win-unsigned/$NAME/monero-win-$VERSION-build.assert
```
<!-- TODO: Replace * above with ${VERSION} once gitian builds correct file name -->
This will create a `.sig` file for each `.assert` file above (2 files for each platform).
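Before submitting, you can sanity-check each detached signature against its `.assert` file (directory names are illustrative):
```bash
# When no data file is given, gpg verifies a *.sig against the file of the same name without the suffix.
for sig in v0.15.0.1-*/fluffypony/*.assert.sig; do
    gpg --verify "$sig"
done
```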
Submitting your signed assert files
-----------------------------------
Make a pull request (both the `.assert` and `.assert.sig` files) to the
[monero-project/gitian.sigs](https://github.com/monero-project/gitian.sigs/) repository:
```bash
git checkout -b $VERSION
# add your assert and sig files...
git commit -S -a -m "Add $GH_USER $VERSION"
git push --set-upstream $GH_USER $VERSION
git checkout -b v0.14.0
git commit -S -a -m "Add $NAME v0.14.0"
git push --set-upstream $NAME v0.14.0
```
**Note:** Please ensure your gpg public key is available to check signatures by adding it to the [gitian.sigs/gitian-pubkeys/](https://github.com/monero-project/gitian.sigs/tree/master/gitian-pubkeys) directory in a pull request.
```bash
gpg --detach-sign ${VERSION}-linux/${SIGNER}/monero-linux-*-build.assert
gpg --detach-sign ${VERSION}-win-unsigned/${SIGNER}/monero-win-*-build.assert
gpg --detach-sign ${VERSION}-osx-unsigned/${SIGNER}/monero-osx-*-build.assert
```
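One way to produce the public key file mentioned in the note above (the key ID and output name are placeholders) is an ASCII-armored export, committed to your fork of gitian.sigs:
```bash
# Replace YOUR_KEY_ID and the file name with your own, then open a PR adding the .asc file.
gpg --armor --export YOUR_KEY_ID > gitian-pubkeys/your-github-name.asc
```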
More Build Options
------------------
@@ -226,19 +177,3 @@ To get all build options run:
./gitian-build.py --help
```
Doing Successive Builds
-----------------------
If you need to do multiple iterations (while developing/testing) you can use the
`--rebuild` option instead of `--build` on subsequent iterations. This skips the
initial check for the freshness of the depends tools. In particular, doing this
check all the time prevents rebuilding when you have no network access.
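For example, a second iteration with a placeholder signer and tag could be run as:
```bash
# --rebuild skips the depends freshness check done by --build.
./gitian-build.py --detach-sign --no-commit --rebuild fluffypony v0.15.0.1
```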
Local-Only Builds
-----------------
If you need to run builds while disconnected from the internet, make sure you have
local up-to-date repos in advance. Then specify your local repo using the `--url`
option when building. This will avoid attempts to git pull across a network.
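A sketch of such an offline invocation, assuming an up-to-date local clone at `/home/gitianuser/monero` (the path is illustrative):
```bash
# Point --url at the local repository so gitian does not try to pull over the network.
./gitian-build.py --detach-sign --no-commit --rebuild --url /home/gitianuser/monero fluffypony v0.15.0.1
```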


@@ -1,136 +0,0 @@
---
name: "monero-android-0.15"
enable_cache: true
suites:
- "bionic"
architectures:
- "amd64"
packages:
- "curl"
- "gperf"
- "gcc-7"
- "g++-7"
- "gcc"
- "g++"
- "binutils-gold"
- "git"
- "pkg-config"
- "build-essential"
- "autoconf"
- "libtool"
- "automake"
- "faketime"
- "bsdmainutils"
- "ca-certificates"
- "python"
- "cmake"
- "ccache"
- "protobuf-compiler"
- "libdbus-1-dev"
- "libharfbuzz-dev"
- "libprotobuf-dev"
- "python3-zmq"
- "unzip"
remotes:
- "url": "https://github.com/monero-project/monero.git"
"dir": "monero"
files: []
script: |
WRAP_DIR=$HOME/wrapped
HOSTS="arm-linux-android aarch64-linux-android"
FAKETIME_HOST_PROGS="clang clang++ ar nm"
FAKETIME_PROGS="date"
HOST_CFLAGS="-O2 -g"
HOST_CXXFLAGS="-O2 -g"
HOST_LDFLAGS=-static-libstdc++
export GZIP="-9n"
export TZ="UTC"
export BUILD_DIR=`pwd`
mkdir -p ${WRAP_DIR}
if test -n "$GBUILD_CACHE_ENABLED"; then
export SOURCES_PATH=${GBUILD_COMMON_CACHE}
export BASE_CACHE=${GBUILD_PACKAGE_CACHE}
mkdir -p ${BASE_CACHE} ${SOURCES_PATH}
fi
export ZERO_AR_DATE=1
function create_global_faketime_wrappers {
for prog in ${FAKETIME_PROGS}; do
echo '#!/usr/bin/env bash' > ${WRAP_DIR}/${prog}
echo "REAL=\`which -a ${prog} | grep -v ${WRAP_DIR}/${prog} | head -1\`" >> ${WRAP_DIR}/${prog}
echo 'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1' >> ${WRAP_DIR}/${prog}
echo "export FAKETIME=\"$1\"" >> ${WRAP_DIR}/${prog}
echo "\$REAL \$@" >> $WRAP_DIR/${prog}
chmod +x ${WRAP_DIR}/${prog}
done
}
function create_per-host_faketime_wrappers {
for i in $HOSTS; do
ABI=$i
if expr $i : arm- > /dev/null
then
ABI=$i"eabi"
fi
NDKDIR="${BUILD_DIR}/monero/contrib/depends/$i/native/bin"
for prog in ${FAKETIME_HOST_PROGS}; do
WRAPPER=${WRAP_DIR}/${ABI}-${prog}
echo '#!/usr/bin/env bash' > ${WRAPPER}
echo 'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1' >> ${WRAPPER}
echo "export FAKETIME=\"$1\"" >> ${WRAPPER}
echo "$NDKDIR/${ABI}-$prog \$@" >> ${WRAPPER}
chmod +x ${WRAPPER}
done
done
}
# Faketime for depends so intermediate results are comparable
export PATH_orig=${PATH}
create_global_faketime_wrappers "2000-01-01 12:00:00"
create_per-host_faketime_wrappers "2000-01-01 12:00:00"
export PATH=${WRAP_DIR}:${PATH}
# gcc 7+ honors SOURCE_DATE_EPOCH, no faketime needed
export SOURCE_DATE_EPOCH=`date -d 2000-01-01T12:00:00 +%s`
git config --global core.abbrev 9
cd monero
# Set the version string that gets added to the tar archive name
version="`git describe`"
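# git describe prints tag-N-gHASH when HEAD is not exactly on a tag; in that case fall back to the 9-character commit hash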
if [[ $version == *"-"*"-"* ]]; then
version="`git rev-parse --short=9 HEAD`"
version="`echo $version | head -c 9`"
fi
BASEPREFIX=`pwd`/contrib/depends
# Build dependencies for each host
export TAR_OPTIONS=--mtime=2000-01-01T12:00:00
for i in $HOSTS; do
make ${MAKEOPTS} -C ${BASEPREFIX} HOST="${i}"
done
# Faketime for binaries
export PATH=${PATH_orig}
create_global_faketime_wrappers "${REFERENCE_DATETIME}"
create_per-host_faketime_wrappers "${REFERENCE_DATETIME}"
# Build in a new dir for each host
export TAR_OPTIONS=--mtime=${REFERENCE_DATE}T${REFERENCE_TIME}
for i in ${HOSTS}; do
export PATH=${WRAP_DIR}:${BASEPREFIX}/${i}/native/bin:${PATH_orig}
mkdir build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=${BASEPREFIX}/${i}/share/toolchain.cmake -DCMAKE_BUILD_TYPE=Release
make ${MAKEOPTS}
chmod 755 bin/*
cp ../LICENSE bin
chmod 644 bin/LICENSE
DISTNAME=monero-${i}-${version}
mv bin ${DISTNAME}
find ${DISTNAME}/ | sort | tar --no-recursion --owner=0 --group=0 -c -T - | bzip2 -9 > ${OUTDIR}/${DISTNAME}.tar.bz2
cd ..
rm -rf build
done


@@ -5,39 +5,33 @@ import os
import subprocess
import sys
gsigs = 'https://github.com/monero-project/gitian.sigs.git'
gbrepo = 'https://github.com/devrandom/gitian-builder.git'
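# maps each one-letter --os flag to [display name, gitian descriptor tag, archive suffix]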
platforms = {'l': ['Linux', 'linux', 'tar.bz2'],
'a': ['Android', 'android', 'tar.bz2'],
'f': ['FreeBSD', 'freebsd', 'tar.bz2'],
'w': ['Windows', 'win', 'zip'],
'm': ['MacOS', 'osx', 'tar.bz2'] }
def setup():
global args, workdir
programs = ['apt-cacher-ng', 'ruby', 'git', 'make', 'wget']
programs = ['ruby', 'git', 'apt-cacher-ng', 'make', 'wget']
if args.kvm:
programs += ['python-vm-builder', 'qemu-kvm', 'qemu-utils']
elif args.docker:
dockers = ['docker.io', 'docker-ce']
for i in dockers:
return_code = subprocess.call(['sudo', 'apt-get', 'install', '-qq', i])
if return_code == 0:
break
if return_code != 0:
print('Cannot find any way to install docker', file=sys.stderr)
exit(1)
else:
programs += ['lxc', 'debootstrap']
if not args.no_apt:
subprocess.check_call(['sudo', 'apt-get', 'install', '-qq'] + programs)
if not os.path.isdir('sigs'):
subprocess.check_call(['git', 'clone', gsigs, 'sigs'])
if not os.path.isdir('builder'):
subprocess.check_call(['git', 'clone', gbrepo, 'builder'])
os.chdir('builder')
subprocess.check_call(['git', 'checkout', 'c0f77ca018cb5332bfd595e0aff0468f77542c23'])
os.makedirs('inputs', exist_ok=True)
os.chdir('inputs')
subprocess.check_call(['sudo', 'apt-get', 'install', '-qq'] + programs)
if not os.path.isdir('gitian.sigs'):
subprocess.check_call(['git', 'clone', 'https://github.com/monero-project/gitian.sigs.git'])
if not os.path.isdir('gitian-builder'):
subprocess.check_call(['git', 'clone', 'https://github.com/devrandom/gitian-builder.git'])
if not os.path.isdir('monero'):
subprocess.check_call(['git', 'clone', args.url, 'monero'])
os.chdir('..')
subprocess.check_call(['git', 'clone', 'https://github.com/monero-project/monero.git'])
os.chdir('gitian-builder')
subprocess.check_call(['git', 'checkout', '963322de8420c50502c4cc33d4d7c0d84437b576'])
make_image_prog = ['bin/make-base-vm', '--suite', 'bionic', '--arch', 'amd64']
if args.docker:
if not subprocess.call(['docker', '--help'], shell=False, stdout=subprocess.DEVNULL):
print("Please install docker first manually")
make_image_prog += ['--docker']
elif not args.kvm:
make_image_prog += ['--lxc']
@@ -46,105 +40,96 @@ def setup():
if args.is_bionic and not args.kvm and not args.docker:
subprocess.check_call(['sudo', 'sed', '-i', 's/lxcbr0/br0/', '/etc/default/lxc-net'])
print('Reboot is required')
sys.exit(0)
def rebuild():
global args, workdir
print('\nBuilding Dependencies\n')
os.makedirs('../out/' + args.version, exist_ok=True)
for i in args.os:
if i == 'm' and args.nomac:
continue
os_name = platforms[i][0]
tag_name = platforms[i][1]
suffix = platforms[i][2]
print('\nCompiling ' + args.version + ' ' + os_name)
infile = 'inputs/monero/contrib/gitian/gitian-' + tag_name + '.yml'
subprocess.check_call(['bin/gbuild', '-j', args.jobs, '-m', args.memory, '--commit', 'monero='+args.commit, '--url', 'monero='+args.url, infile])
subprocess.check_call(['bin/gsign', '-p', args.sign_prog, '--signer', args.signer, '--release', args.version+'-linux', '--destination', '../sigs/', infile])
subprocess.check_call('mv build/out/monero-*.' + suffix + ' ../out/'+args.version, shell=True)
print('Moving var/install.log to var/install-' + tag_name + '.log')
subprocess.check_call('mv var/install.log var/install-' + tag_name + '.log', shell=True)
print('Moving var/build.log to var/build-' + tag_name + '.log')
subprocess.check_call('mv var/build.log var/build-' + tag_name + '.log', shell=True)
os.chdir(workdir)
if args.commit_files:
print('\nCommitting '+args.version+' Unsigned Sigs\n')
os.chdir('sigs')
for i, v in platforms.items():
subprocess.check_call(['git', 'add', args.version+'-'+v[1]+'/'+args.signer])
subprocess.check_call(['git', 'commit', '-m', 'Add '+args.version+' unsigned sigs for '+args.signer])
os.chdir(workdir)
exit(0)
def build():
global args, workdir
print('\nChecking Depends Freshness\n')
os.chdir('builder')
os.makedirs('monero-binaries/' + args.version, exist_ok=True)
print('\nBuilding Dependencies\n')
os.chdir('gitian-builder')
os.makedirs('inputs', exist_ok=True)
subprocess.check_call(['wget', '-N', '-P', 'inputs', 'https://downloads.sourceforge.net/project/osslsigncode/osslsigncode/osslsigncode-1.7.1.tar.gz'])
subprocess.check_call(['wget', '-N', '-P', 'inputs', 'https://bitcoincore.org/cfields/osslsigncode-Backports-to-1.7.1.patch'])
subprocess.check_output(["echo 'a8c4e9cafba922f89de0df1f2152e7be286aba73f78505169bc351a7938dd911 inputs/osslsigncode-Backports-to-1.7.1.patch' | sha256sum -c"], shell=True)
subprocess.check_output(["echo 'f9a8cdb38b9c309326764ebc937cba1523a3a751a7ab05df3ecc99d18ae466c9 inputs/osslsigncode-1.7.1.tar.gz' | sha256sum -c"], shell=True)
subprocess.check_call(['make', '-C', 'inputs/monero/contrib/depends', 'download', 'SOURCES_PATH=' + os.getcwd() + '/cache/common'])
subprocess.check_call(['make', '-C', '../monero/contrib/depends', 'download', 'SOURCES_PATH=' + os.getcwd() + '/cache/common'])
rebuild()
if args.linux:
print('\nCompiling ' + args.version + ' Linux')
subprocess.check_call(['bin/gbuild', '-j', args.jobs, '-m', args.memory, '--commit', 'monero='+args.commit, '--url', 'monero='+args.url, '../monero/contrib/gitian/gitian-linux.yml'])
subprocess.check_call(['bin/gsign', '-p', args.sign_prog, '--signer', args.signer, '--release', args.version+'-linux', '--destination', '../gitian.sigs/', '../monero/contrib/gitian/gitian-linux.yml'])
subprocess.check_call('mv build/out/monero-*.tar.gz ../monero-binaries/'+args.version, shell=True)
if args.windows:
print('\nCompiling ' + args.version + ' Windows')
subprocess.check_call(['bin/gbuild', '-j', args.jobs, '-m', args.memory, '--commit', 'monero='+args.commit, '--url', 'monero='+args.url, '../monero/contrib/gitian/gitian-win.yml'])
subprocess.check_call(['bin/gsign', '-p', args.sign_prog, '--signer', args.signer, '--release', args.version+'-win', '--destination', '../gitian.sigs/', '../monero/contrib/gitian/gitian-win.yml'])
subprocess.check_call('mv build/out/monero*.zip ../monero-binaries/'+args.version, shell=True)
if args.macos:
print('\nCompiling ' + args.version + ' MacOS')
subprocess.check_call(['bin/gbuild', '-j', args.jobs, '-m', args.memory, '--commit', 'monero='+args.commit, '--url', 'monero='+args.url, '../monero/contrib/gitian/gitian-osx.yml'])
subprocess.check_call(['bin/gsign', '-p', args.sign_prog, '--signer', args.signer, '--release', args.version+'-osx', '--destination', '../gitian.sigs/', '../monero/contrib/gitian/gitian-osx.yml'])
subprocess.check_call('mv build/out/monero*.tar.gz ../monero-binaries/'+args.version, shell=True)
os.chdir(workdir)
if args.commit_files:
print('\nCommitting '+args.version+' Unsigned Sigs\n')
os.chdir('gitian.sigs')
subprocess.check_call(['git', 'add', args.version+'-linux/'+args.signer])
subprocess.check_call(['git', 'add', args.version+'-win/'+args.signer])
subprocess.check_call(['git', 'add', args.version+'-osx/'+args.signer])
subprocess.check_call(['git', 'commit', '-m', 'Add '+args.version+' unsigned sigs for '+args.signer])
os.chdir(workdir)
def verify():
global args, workdir
os.chdir('builder')
os.chdir('gitian-builder')
for i, v in platforms.items():
print('\nVerifying v'+args.version+' '+v[0]+'\n')
subprocess.check_call(['bin/gverify', '-v', '-d', '../sigs/', '-r', args.version+'-'+v[1], 'inputs/monero/contrib/gitian/gitian-'+v[1]+'.yml'])
print('\nVerifying v'+args.version+' Linux\n')
subprocess.check_call(['bin/gverify', '-v', '-d', '../gitian.sigs/', '-r', args.version+'-linux', '../monero/contrib/gitian/gitian-linux.yml'])
print('\nVerifying v'+args.version+' Windows\n')
subprocess.check_call(['bin/gverify', '-v', '-d', '../gitian.sigs/', '-r', args.version+'-win', '../monero/contrib/gitian/gitian-win.yml'])
print('\nVerifying v'+args.version+' MacOS\n')
subprocess.check_call(['bin/gverify', '-v', '-d', '../gitian.sigs/', '-r', args.version+'-osx', '../monero/contrib/gitian/gitian-osx.yml'])
os.chdir(workdir)
def main():
global args, workdir
parser = argparse.ArgumentParser(description='Script for running full Gitian builds.', usage='%(prog)s [options] signer version')
parser = argparse.ArgumentParser(usage='%(prog)s [options] signer version')
parser.add_argument('-c', '--commit', action='store_true', dest='commit', help='Indicate that the version argument is for a commit or branch')
parser.add_argument('-p', '--pull', action='store_true', dest='pull', help='Indicate that the version argument is the number of a github repository pull request')
parser.add_argument('-u', '--url', dest='url', default='https://github.com/monero-project/monero', help='Specify the URL of the repository. Default is %(default)s')
parser.add_argument('-v', '--verify', action='store_true', dest='verify', help='Verify the Gitian build')
parser.add_argument('-b', '--build', action='store_true', dest='build', help='Do a Gitian build')
parser.add_argument('-B', '--buildsign', action='store_true', dest='buildsign', help='Build both signed and unsigned binaries')
parser.add_argument('-o', '--os', dest='os', default='lafwm', help='Specify which Operating Systems the build is for. Default is %(default)s. l for Linux, a for Android, f for FreeBSD, w for Windows, m for MacOS')
parser.add_argument('-r', '--rebuild', action='store_true', dest='rebuild', help='Redo a Gitian build')
parser.add_argument('-R', '--rebuildsign', action='store_true', dest='rebuildsign', help='Redo and sign a Gitian build')
parser.add_argument('-o', '--os', dest='os', default='lwm', help='Specify which Operating Systems the build is for. Default is %(default)s. l for Linux, w for Windows, m for MacOS')
parser.add_argument('-j', '--jobs', dest='jobs', default='2', help='Number of processes to use. Default %(default)s')
parser.add_argument('-m', '--memory', dest='memory', default='2000', help='Memory to allocate in MiB. Default %(default)s')
parser.add_argument('-k', '--kvm', action='store_true', dest='kvm', help='Use KVM instead of LXC')
parser.add_argument('-d', '--docker', action='store_true', dest='docker', help='Use Docker instead of LXC')
parser.add_argument('-S', '--setup', action='store_true', dest='setup', help='Set up the Gitian building environment. Uses LXC. If you want to use KVM, use the --kvm option. If you run this script on a non-debian based system, pass the --no-apt flag')
parser.add_argument('-S', '--setup', action='store_true', dest='setup', help='Set up the Gitian building environment. Uses LXC. If you want to use KVM, use the --kvm option. Only works on Debian-based systems (Ubuntu, Debian)')
parser.add_argument('-D', '--detach-sign', action='store_true', dest='detach_sign', help='Create the assert file for detached signing. Will not commit anything.')
parser.add_argument('-n', '--no-commit', action='store_false', dest='commit_files', help='Do not commit anything to git')
parser.add_argument('signer', nargs='?', help='GPG signer to sign each build assert file')
parser.add_argument('version', nargs='?', help='Version number, commit, or branch to build.')
parser.add_argument('-a', '--no-apt', action='store_true', dest='no_apt', help='Indicate that apt is not installed on the system')
parser.add_argument('signer', help='GPG signer to sign each build assert file')
parser.add_argument('version', help='Version number, commit, or branch to build.')
args = parser.parse_args()
workdir = os.getcwd()
args.linux = 'l' in args.os
args.windows = 'w' in args.os
args.macos = 'm' in args.os
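# lsb_release -cs prints the distro codename (as bytes); 'bionic' means Ubuntu 18.04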
args.is_bionic = b'bionic' in subprocess.check_output(['lsb_release', '-cs'])
if args.buildsign:
args.build = True
args.sign = True
if args.rebuildsign:
args.rebuild = True
args.sign = True
args.build = True
args.sign = True
if args.kvm and args.docker:
raise Exception('Error: cannot have both kvm and docker')
@@ -161,23 +146,21 @@ def main():
if not 'LXC_GUEST_IP' in os.environ.keys():
os.environ['LXC_GUEST_IP'] = '10.0.3.5'
# Disable MacOS build if no SDK found
args.nomac = False
if 'm' in args.os and not os.path.isfile('builder/inputs/MacOSX10.11.sdk.tar.gz'):
if args.build:
print('Cannot build for MacOS, SDK does not exist. Will build for other OSes')
args.nomac = True
# Disable for MacOS if no SDK found
if args.macos and not os.path.isfile('gitian-builder/inputs/MacOSX10.11.sdk.tar.gz'):
print('Cannot build for MacOS, SDK does not exist. Will build for other OSes')
args.macos = False
script_name = os.path.basename(sys.argv[0])
# Signer and version shouldn't be empty
if args.signer == '':
print(script_name+': Missing signer.')
print('Try '+script_name+' --help for more information')
sys.exit(1)
exit(1)
if args.version == '':
print(script_name+': Missing version.')
print('Try '+script_name+' --help for more information')
sys.exit(1)
exit(1)
# Add leading 'v' for tags
if args.commit and args.pull:
@@ -187,9 +170,10 @@ def main():
if args.setup:
setup()
os.makedirs('builder/inputs/monero', exist_ok=True)
os.chdir('builder/inputs/monero')
os.chdir('monero')
if args.pull:
subprocess.check_call(['git', 'fetch', args.url, 'refs/pull/'+args.version+'/merge'])
os.chdir('../gitian-builder/inputs/monero')
subprocess.check_call(['git', 'fetch', args.url, 'refs/pull/'+args.version+'/merge'])
args.commit = subprocess.check_output(['git', 'show', '-s', '--format=%H', 'FETCH_HEAD'], universal_newlines=True).strip()
args.version = 'pull-' + args.version
@@ -201,10 +185,6 @@ def main():
if args.build:
build()
if args.rebuild:
os.chdir('builder')
rebuild()
if args.verify:
verify()


@@ -1,133 +0,0 @@
---
name: "monero-freebsd-0.15"
enable_cache: true
suites:
- "bionic"
architectures:
- "amd64"
packages:
- "curl"
- "clang-8"
- "gperf"
- "gcc-7"
- "g++-7"
- "gcc"
- "g++"
- "binutils-gold"
- "git"
- "pkg-config"
- "build-essential"
- "autoconf"
- "libtool"
- "automake"
- "faketime"
- "bsdmainutils"
- "ca-certificates"
- "python"
- "cmake"
- "ccache"
- "protobuf-compiler"
- "libdbus-1-dev"
- "libharfbuzz-dev"
- "libprotobuf-dev"
- "python3-zmq"
remotes:
- "url": "https://github.com/monero-project/monero.git"
"dir": "monero"
files: []
script: |
WRAP_DIR=$HOME/wrapped
HOSTS="x86_64-unknown-freebsd"
FAKETIME_HOST_PROGS=""
FAKETIME_PROGS="clang-8 clang++-8 date"
HOST_CFLAGS="-O2 -g"
HOST_CXXFLAGS="-O2 -g"
HOST_LDFLAGS=-static-libstdc++
export GZIP="-9n"
export TZ="UTC"
export BUILD_DIR=`pwd`
mkdir -p ${WRAP_DIR}
if test -n "$GBUILD_CACHE_ENABLED"; then
export SOURCES_PATH=${GBUILD_COMMON_CACHE}
export BASE_CACHE=${GBUILD_PACKAGE_CACHE}
mkdir -p ${BASE_CACHE} ${SOURCES_PATH}
fi
export ZERO_AR_DATE=1
function create_global_faketime_wrappers {
for prog in ${FAKETIME_PROGS}; do
echo '#!/usr/bin/env bash' > ${WRAP_DIR}/${prog}
echo "REAL=\`which -a ${prog} | grep -v ${WRAP_DIR}/${prog} | head -1\`" >> ${WRAP_DIR}/${prog}
echo 'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1' >> ${WRAP_DIR}/${prog}
echo "export FAKETIME=\"$1\"" >> ${WRAP_DIR}/${prog}
echo "\$REAL \$@" >> $WRAP_DIR/${prog}
chmod +x ${WRAP_DIR}/${prog}
done
}
function create_per-host_faketime_wrappers {
for i in $HOSTS; do
for prog in ${FAKETIME_HOST_PROGS}; do
WRAPPER=${WRAP_DIR}/${i}-${prog}
echo '#!/usr/bin/env bash' > ${WRAPPER}
echo "REAL=\`which -a ${i}-${prog} | grep -v ${WRAP_DIR}/${i}-${prog} | head -1\`" >> ${WRAPPER}
echo 'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1' >> ${WRAPPER}
echo "export FAKETIME=\"$1\"" >> ${WRAPPER}
echo "$NDKDIR/${ABI}-$prog \$@" >> ${WRAPPER}
chmod +x ${WRAPPER}
done
done
}
# Faketime for depends so intermediate results are comparable
export PATH_orig=${PATH}
create_global_faketime_wrappers "2000-01-01 12:00:00"
create_per-host_faketime_wrappers "2000-01-01 12:00:00"
export PATH=${WRAP_DIR}:${PATH}
# gcc 7+ honors SOURCE_DATE_EPOCH, no faketime needed
export SOURCE_DATE_EPOCH=`date -d 2000-01-01T12:00:00 +%s`
git config --global core.abbrev 9
cd monero
# Set the version string that gets added to the tar archive name
version="`git describe`"
if [[ $version == *"-"*"-"* ]]; then
version="`git rev-parse --short=9 HEAD`"
version="`echo $version | head -c 9`"
fi
BASEPREFIX=`pwd`/contrib/depends
# Build dependencies for each host
export TAR_OPTIONS=--mtime=2000-01-01T12:00:00
for i in $HOSTS; do
make ${MAKEOPTS} -C ${BASEPREFIX} HOST="${i}"
done
# Faketime for binaries
export PATH=${PATH_orig}
create_global_faketime_wrappers "${REFERENCE_DATETIME}"
create_per-host_faketime_wrappers "${REFERENCE_DATETIME}"
ORIGPATH="$PATH"
# Build in a new dir for each host
export SOURCE_DATE_EPOCH=`date -d ${REFERENCE_DATE}T${REFERENCE_TIME} +%s`
export TAR_OPTIONS=--mtime=${REFERENCE_DATE}T${REFERENCE_TIME}
for i in ${HOSTS}; do
export PATH=${WRAP_DIR}:${BASEPREFIX}/${i}/native/bin:${ORIGPATH}
mkdir build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=${BASEPREFIX}/${i}/share/toolchain.cmake -DCMAKE_BUILD_TYPE=Release
make ${MAKEOPTS}
chmod 755 bin/*
cp ../LICENSE bin
chmod 644 bin/LICENSE
DISTNAME=monero-${i}-${version}
mv bin ${DISTNAME}
find ${DISTNAME}/ | sort | tar --no-recursion --owner=0 --group=0 -c -T - | bzip2 -9 > ${OUTDIR}/${DISTNAME}.tar.bz2
cd ..
rm -rf build
done


@@ -1,5 +1,5 @@
---
name: "monero-linux-0.15"
name: "monero-linux-0.14"
enable_cache: true
suites:
- "bionic"
@@ -50,13 +50,14 @@ script: |
WRAP_DIR=$HOME/wrapped
HOSTS="x86_64-linux-gnu arm-linux-gnueabihf aarch64-linux-gnu i686-linux-gnu"
FAKETIME_HOST_PROGS=""
FAKETIME_PROGS="date"
FAKETIME_HOST_PROGS="gcc g++"
FAKETIME_PROGS="date ar ranlib nm"
HOST_CFLAGS="-O2 -g"
HOST_CXXFLAGS="-O2 -g"
HOST_LDFLAGS=-static-libstdc++
export GZIP="-9n"
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export TZ="UTC"
export BUILD_DIR=`pwd`
mkdir -p ${WRAP_DIR}
@@ -104,38 +105,41 @@ script: |
# x86 needs /usr/include/i386-linux-gnu/asm pointed to /usr/include/x86_64-linux-gnu/asm,
# but we can't write there. Instead, create a link here and force it to be included in the
# search paths.
# This problem goes away if linux-libc-dev:i386 pkg exists, but it's not in bionic.
# search paths by wrapping gcc/g++.
mkdir -p $EXTRA_INCLUDES_BASE/i686-linux-gnu
rm -f $WRAP_DIR/extra_includes/i686-linux-gnu/asm
ln -s /usr/include/x86_64-linux-gnu/asm $EXTRA_INCLUDES_BASE/i686-linux-gnu/asm
mkdir -p $EXTRA_INCLUDES_BASE/i686-pc-linux-gnu
rm -f $WRAP_DIR/extra_includes/i686-pc-linux-gnu/asm
ln -s /usr/include/x86_64-linux-gnu/asm $EXTRA_INCLUDES_BASE/i686-pc-linux-gnu/asm
# gcc 7+ honors SOURCE_DATE_EPOCH, no faketime needed
export SOURCE_DATE_EPOCH=`date -d 2000-01-01T12:00:00 +%s`
for prog in gcc g++; do
rm -f ${WRAP_DIR}/${prog}
cat << EOF > ${WRAP_DIR}/${prog}
#!/usr/bin/env bash
REAL="`which -a ${prog}-7 | grep -v ${WRAP_DIR}/${prog} | head -1`"
for var in "\$@"
do
if [ "\$var" = "-m32" ]; then
export C_INCLUDE_PATH="$EXTRA_INCLUDES_BASE/i686-pc-linux-gnu"
export CPLUS_INCLUDE_PATH="$EXTRA_INCLUDES_BASE/i686-pc-linux-gnu"
break
fi
done
\$REAL \$@
EOF
chmod +x ${WRAP_DIR}/${prog}
done
git config --global core.abbrev 9
cd monero
# Set the version string that gets added to the tar archive name
version="`git describe`"
if [[ $version == *"-"*"-"* ]]; then
version="`git rev-parse --short=9 HEAD`"
version="`echo $version | head -c 9`"
fi
BASEPREFIX=`pwd`/contrib/depends
# Build dependencies for each host
export TAR_OPTIONS=--mtime=2000-01-01T12:00:00
for i in $HOSTS; do
EXTRA_INCLUDES="$EXTRA_INCLUDES_BASE/$i"
if [ -d "$EXTRA_INCLUDES" ]; then
export C_INCLUDE_PATH="$EXTRA_INCLUDES"
export CPLUS_INCLUDE_PATH="$EXTRA_INCLUDES"
else
unset C_INCLUDE_PATH
unset CPLUS_INCLUDE_PATH
export HOST_ID_SALT="$EXTRA_INCLUDES"
fi
make ${MAKEOPTS} -C ${BASEPREFIX} HOST="${i}" V=1
unset HOST_ID_SALT
done
# Faketime for binaries
@@ -146,28 +150,14 @@ script: |
ORIGPATH="$PATH"
# Build in a new dir for each host
export SOURCE_DATE_EPOCH=`date -d ${REFERENCE_DATE}T${REFERENCE_TIME} +%s`
export TAR_OPTIONS=--mtime=${REFERENCE_DATE}T${REFERENCE_TIME}
for i in ${HOSTS}; do
export PATH=${BASEPREFIX}/${i}/native/bin:${ORIGPATH}
mkdir build && cd build
EXTRA_INCLUDES="$EXTRA_INCLUDES_BASE/$i"
if [ -d "$EXTRA_INCLUDES" ]; then
export C_INCLUDE_PATH="$EXTRA_INCLUDES"
export CPLUS_INCLUDE_PATH="$EXTRA_INCLUDES"
else
unset C_INCLUDE_PATH
unset CPLUS_INCLUDE_PATH
fi
cmake .. -DCMAKE_TOOLCHAIN_FILE=${BASEPREFIX}/${i}/share/toolchain.cmake -DBACKCOMPAT=ON
make ${MAKEOPTS}
chmod 755 bin/*
cp ../LICENSE bin
chmod 644 bin/LICENSE
DISTNAME=monero-${i}-${version}
DISTNAME=monero-${i}
mv bin ${DISTNAME}
find ${DISTNAME}/ | sort | tar --no-recursion --owner=0 --group=0 -c -T - | bzip2 -9 > ${OUTDIR}/${DISTNAME}.tar.bz2
find ${DISTNAME}/ | sort | tar --no-recursion --mode='u+rw,go+r-w,a+X' --owner=0 --group=0 -c -T - | gzip -9n > ${OUTDIR}/${DISTNAME}.tar.gz
cd ..
rm -rf build
done


@@ -1,5 +1,5 @@
---
name: "monero-osx-0.15"
name: "monero-osx-0.14"
enable_cache: true
suites:
- "bionic"
@@ -32,9 +32,10 @@ script: |
WRAP_DIR=$HOME/wrapped
HOSTS="x86_64-apple-darwin11"
FAKETIME_HOST_PROGS=""
FAKETIME_PROGS="ar ranlib date dmg genisoimage python"
FAKETIME_PROGS="ar ranlib date dmg genisoimage"
export GZIP="-9n"
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export TZ="UTC"
export BUILD_DIR=`pwd`
mkdir -p ${WRAP_DIR}
@@ -78,20 +79,12 @@ script: |
git config --global core.abbrev 9
cd monero
# Set the version string that gets added to the tar archive name
version="`git describe`"
if [[ $version == *"-"*"-"* ]]; then
version="`git rev-parse --short=9 HEAD`"
version="`echo $version | head -c 9`"
fi
BASEPREFIX=`pwd`/contrib/depends
mkdir -p ${BASEPREFIX}/SDKs
tar -C ${BASEPREFIX}/SDKs -xf ${BUILD_DIR}/MacOSX10.11.sdk.tar.gz
# Build dependencies for each host
export TAR_OPTIONS=--mtime=2000-01-01T12:00:00
for i in $HOSTS; do
make ${MAKEOPTS} -C ${BASEPREFIX} HOST="${i}"
done
@@ -104,18 +97,14 @@ script: |
ORIGPATH="$PATH"
# Build in a new dir for each host
export TAR_OPTIONS=--mtime=${REFERENCE_DATE}T${REFERENCE_TIME}
for i in ${HOSTS}; do
export PATH=${BASEPREFIX}/${i}/native/bin:${ORIGPATH}
mkdir build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=${BASEPREFIX}/${i}/share/toolchain.cmake
make ${MAKEOPTS}
chmod 755 bin/*
cp ../LICENSE bin
chmod 644 bin/LICENSE
DISTNAME=monero-${i}-${version}
DISTNAME=monero-${i}
mv bin ${DISTNAME}
find ${DISTNAME}/ | sort | tar --no-recursion --owner=0 --group=0 -c -T - | bzip2 -9 > ${OUTDIR}/${DISTNAME}.tar.bz2
find ${DISTNAME}/ | sort | tar --no-recursion --mode='u+rw,go+r-w,a+X' --owner=0 --group=0 -c -T - | gzip -9n > ${OUTDIR}/${DISTNAME}.tar.gz
cd ..
rm -rf build
done


@@ -1,5 +1,5 @@
---
name: "monero-win-0.15"
name: "monero-win-0.14"
enable_cache: true
suites:
- "bionic"
@@ -22,19 +22,6 @@ packages:
- "python"
- "rename"
- "cmake"
alternatives:
-
package: "i686-w64-mingw32-g++"
path: "/usr/bin/i686-w64-mingw32-g++-posix"
-
package: "i686-w64-mingw32-gcc"
path: "/usr/bin/i686-w64-mingw32-gcc-posix"
-
package: "x86_64-w64-mingw32-g++"
path: "/usr/bin/x86_64-w64-mingw32-g++-posix"
-
package: "x86_64-w64-mingw32-gcc"
path: "/usr/bin/x86_64-w64-mingw32-gcc-posix"
remotes:
- "url": "https://github.com/monero-project/monero.git"
"dir": "monero"
@@ -42,12 +29,13 @@ files: []
script: |
WRAP_DIR=$HOME/wrapped
HOSTS="i686-w64-mingw32 x86_64-w64-mingw32"
FAKETIME_HOST_PROGS="windres objcopy"
FAKETIME_HOST_PROGS="ar ranlib nm windres strip objcopy"
FAKETIME_PROGS="date zip"
HOST_CFLAGS="-O2 -g"
HOST_CXXFLAGS="-O2 -g"
export GZIP="-9n"
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export TZ="UTC"
export BUILD_DIR=`pwd`
mkdir -p ${WRAP_DIR}
@@ -81,27 +69,41 @@ script: |
done
}
function create_per-host_linker_wrapper {
# This is only needed for trusty, as the mingw linker leaks a few bytes of
# heap, causing non-determinism. See discussion in https://github.com/bitcoin/bitcoin/pull/6900
for i in $HOSTS; do
mkdir -p ${WRAP_DIR}/${i}
for prog in collect2; do
echo '#!/usr/bin/env bash' > ${WRAP_DIR}/${i}/${prog}
REAL=$(${i}-gcc -print-prog-name=${prog})
echo "export MALLOC_PERTURB_=255" >> ${WRAP_DIR}/${i}/${prog}
echo "${REAL} \$@" >> $WRAP_DIR/${i}/${prog}
chmod +x ${WRAP_DIR}/${i}/${prog}
done
for prog in gcc g++; do
echo '#!/usr/bin/env bash' > ${WRAP_DIR}/${i}-${prog}
echo "REAL=\`which -a ${i}-${prog}-posix | grep -v ${WRAP_DIR}/${i}-${prog} | head -1\`" >> ${WRAP_DIR}/${i}-${prog}
echo 'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1' >> ${WRAP_DIR}/${i}-${prog}
echo "export FAKETIME=\"$1\"" >> ${WRAP_DIR}/${i}-${prog}
echo "export COMPILER_PATH=${WRAP_DIR}/${i}" >> ${WRAP_DIR}/${i}-${prog}
echo "\$REAL \$@" >> $WRAP_DIR/${i}-${prog}
chmod +x ${WRAP_DIR}/${i}-${prog}
done
done
}
# Faketime for depends so intermediate results are comparable
export PATH_orig=${PATH}
create_global_faketime_wrappers "2000-01-01 12:00:00"
create_per-host_faketime_wrappers "2000-01-01 12:00:00"
create_per-host_linker_wrapper "2000-01-01 12:00:00"
export PATH=${WRAP_DIR}:${PATH}
# gcc 7+ honors SOURCE_DATE_EPOCH, no faketime needed
export SOURCE_DATE_EPOCH=`date -d 2000-01-01T12:00:00 +%s`
git config --global core.abbrev 9
cd monero
# Set the version string that gets added to the tar archive name
version="`git describe`"
if [[ $version == *"-"*"-"* ]]; then
version="`git rev-parse --short=9 HEAD`"
version="`echo $version | head -c 9`"
fi
BASEPREFIX=`pwd`/contrib/depends
# Build dependencies for each host
export TAR_OPTIONS=--mtime=2000-01-01T12:00:00
for i in $HOSTS; do
EXTRA_INCLUDES="$EXTRA_INCLUDES_BASE/$i"
if [ -d "$EXTRA_INCLUDES" ]; then
@@ -120,15 +122,12 @@ script: |
ORIGPATH="$PATH"
# Run cmake and make, for each create a new build/ directory,
# compile from there, archive, export and delete the archive again
export SOURCE_DATE_EPOCH=`date -d ${REFERENCE_DATE}T${REFERENCE_TIME} +%s`
export TAR_OPTIONS=--mtime=${REFERENCE_DATE}T${REFERENCE_TIME}
for i in ${HOSTS}; do
export PATH=${BASEPREFIX}/${i}/native/bin:${ORIGPATH}
mkdir build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=${BASEPREFIX}/${i}/share/toolchain.cmake
make ${MAKEOPTS}
cp ../LICENSE bin
DISTNAME=monero-${i}-${version}
DISTNAME=monero-${i}
mv bin ${DISTNAME}
find ${DISTNAME}/ | sort | zip -X@ ${OUTDIR}/${DISTNAME}.zip
cd .. && rm -rf build

Some files were not shown because too many files have changed in this diff.