aio_overpass
Async client for the Overpass API.
Usage
There are three basic steps to fetch the spatial data you need:

**Formulate a query**
- Either write your own custom query, e.g. `Query("node(5369192667); out;")`,
- or use one of the `Query` subclasses, e.g. `SingleRouteQuery(relation_id=1643324)`.
**Call the Overpass API**
- Prepare your client with `client = Client(user_agent=...)`.
- Use `await client.run_query(query)` to fetch the result set.
**Collect results**
- Either access the raw result dictionaries with `query.result_set`,
- or use a collector, e.g. `collect_elements(query)`, to get a list of typed `Element`s.
- Collectors are often specific to queries - `collect_routes` requires a `RouteQuery`, for instance.
Example: looking up a building in Hamburg
a) Results as Dictionaries
You may use the `.result_set` property to get a list of all query results without any extra processing:

```python
from aio_overpass import Client, Query

query = Query('way["addr:housename"=Elbphilharmonie]; out geom;')

client = Client()
await client.run_query(query)

query.result_set
```

```python
[
    {
        "type": "way",
        "id": 24981342,
        # ...
        "tags": {
            "addr:city": "Hamburg",
            "addr:country": "DE",
            "addr:housename": "Elbphilharmonie",
            # ...
        },
    }
]
```
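Each entry in the raw result set is a plain dictionary in the Overpass JSON output shape, so you can work with it using only the standard library. A small sketch using a hand-written sample entry (not live API output):

```python
# a hand-written sample entry in the shape returned by query.result_set
result_set = [
    {
        "type": "way",
        "id": 24981342,
        "tags": {
            "addr:city": "Hamburg",
            "addr:country": "DE",
            "addr:housename": "Elbphilharmonie",
        },
    }
]

# index elements by (type, id), which is unique across a result set
by_ref = {(e["type"], e["id"]): e for e in result_set}
elem = by_ref[("way", 24981342)]

# tags may be absent on some elements, so use .get() with a default
print(elem.get("tags", {}).get("addr:housename"))  # Elbphilharmonie
```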
b) Results as Objects
This will give you a user-friendly Python interface for nodes, ways, and relations. Here we use the `.tags` property:

```python
from aio_overpass.element import collect_elements

elems = collect_elements(query)

elems[0].tags
```

```python
{
    "addr:city": "Hamburg",
    "addr:country": "DE",
    "addr:housename": "Elbphilharmonie",
    # ...
}
```
c) Results as GeoJSON
The processed elements can also easily be converted to GeoJSON:
```python
import json

json.dumps(elems[0].geojson, indent=4)
```

```json
{
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [
            [
                [
                    9.9832434,
                    53.5415472
                ],
                ...
            ]
        ]
    },
    "properties": {
        "id": 24981342,
        "type": "way",
        "tags": {
            "addr:city": "Hamburg",
            "addr:country": "DE",
            "addr:housename": "Elbphilharmonie",
            ...
        },
        ...
    },
    "bbox": [
        9.9832434,
        53.540877,
        9.9849674,
        53.5416212
    ]
}
```
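The `bbox` member follows the GeoJSON convention: `[min lon, min lat, max lon, max lat]`, i.e. west, south, east, north. As a sanity check, such a bounding box can be derived from a polygon ring with plain Python (the coordinates below are made up, not the actual Elbphilharmonie outline):

```python
# made-up exterior ring of a polygon, as (lon, lat) pairs like in GeoJSON
ring = [
    (9.9832434, 53.5415472),
    (9.9849674, 53.5416212),
    (9.9841000, 53.5408770),
    (9.9832434, 53.5415472),  # closed ring: first == last
]

lons = [lon for lon, _ in ring]
lats = [lat for _, lat in ring]

# GeoJSON bbox order: [min lon, min lat, max lon, max lat]
bbox = [min(lons), min(lats), max(lons), max(lats)]
print(bbox)  # [9.9832434, 53.540877, 9.9849674, 53.5416212]
```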
Choosing Extras
This library can be installed with a number of optional extras.

- Install no extras, if you're fine with `dict` result sets.
- Install the `shapely` extra, if you would like the convenience of typed OSM elements. It is also useful if you are interested in elements' geometries, and either already use Shapely, or want a simple way to export GeoJSON.
  - This includes the `pt` module to make it easier to interact with public transportation routes. Something seemingly trivial like listing the stops of a route can have unexpected pitfalls, since stops can have multiple route members, and may have a range of different tags and roles. This submodule will clean up the relation data for you.
- Install the `networkx` extra to enable the `pt_ordered` module, if you want a route's path as a simple line from A to B. It is hard to do this consistently, mainly because ways are not always ordered, and stop positions might be missing. You can benefit from this submodule if you wish to
  - render a route's path between any two stops,
  - measure the route's travelled distance between any two stops,
  - validate the order of ways in the relation,
  - or check if the route relation has gaps.
- Install the `joblib` extra to speed up `pt_ordered.collect_ordered_routes()`, which can benefit greatly from parallelization.
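Extras use the standard pip extras syntax (the package is published on PyPI as `aio-overpass`); for example:

```shell
pip install aio-overpass                      # no extras: dict result sets only
pip install "aio-overpass[shapely]"           # typed elements and geometries
pip install "aio-overpass[shapely,networkx]"  # also enables the pt_ordered module
```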
Coordinates
- Geographic point locations are expressed by latitude (`lat`) and longitude (`lon`) coordinates.
  - Latitude is given as an angle that ranges from -90° at the south pole to 90° at the north pole, with 0° at the Equator.
  - Longitude is given as an angle ranging from 0° at the Prime Meridian (the line that divides the globe into Eastern and Western hemispheres), to +180° eastward and -180° westward.
  - `lat`/`lon` values are `float`s that are exactly those degrees, just without the ° sign.
- This might help you remember which coordinate is which:
  - If you think of a world map, usually it's a rectangle.
  - The long side (the largest side) is the longitude.
  - Longitude is the x-axis, and latitude is the y-axis.
- Be wary of coordinate order.
- OpenStreetMap uses the WGS84 spatial reference system used by the Global Positioning System (GPS).
- OpenStreetMap node coordinates have seven decimal places, which gives them centimetric precision. However, the position accuracy of GPS data is only about 10 m. A reasonable display accuracy could be five places, which is precise to 1.1 metres at the equator.
- Spatial features that cross the 180th meridian are problematic, since you go from longitude `180.0` to `-180.0`. Such features usually have their geometries split up, like the area of Russia.
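A couple of the points above can be made concrete with plain Python: validating a lat/lon pair against the WGS84 ranges, and rounding to a display precision of five decimal places. The helper names are made up for illustration; they are not part of this library:

```python
def is_valid_coordinate(lat: float, lon: float) -> bool:
    """Check that latitude and longitude are within their WGS84 ranges."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0


def display_precision(lat: float, lon: float) -> tuple[float, float]:
    """Round to five decimal places, which is ~1.1 m at the equator."""
    return round(lat, 5), round(lon, 5)


print(is_valid_coordinate(lat=53.5415472, lon=9.9832434))  # True
print(is_valid_coordinate(lat=99.0, lon=0.0))              # False

# OSM stores seven decimal places; five are usually enough for display
print(display_precision(53.5415472, 9.9832434))  # (53.54155, 9.98324)
```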
1""" 2Async client for the Overpass API. 3 4[Release Notes](https://github.com/timwie/aio-overpass/blob/main/RELEASES.md) 5 6[Examples](https://github.com/timwie/aio-overpass/tree/main/examples) 7""" 8 9import importlib.metadata 10from pathlib import Path 11 12 13__version__: str = importlib.metadata.version("aio-overpass") 14 15# we add this to all modules for pdoc; 16# see https://pdoc.dev/docs/pdoc.html#use-numpydoc-or-google-docstrings 17__docformat__ = "google" 18 19# we also use __all__ in all modules for pdoc; this lets us control the order 20__all__ = ( 21 "__version__", 22 "Client", 23 "ClientError", 24 "Query", 25 "client", 26 "element", 27 "error", 28 "pt", 29 "pt_ordered", 30 "ql", 31 "query", 32 "spatial", 33) 34 35from .client import Client 36from .error import ClientError 37from .query import Query 38 39 40# extend the module's docstring 41for filename in ("usage.md", "extras.md", "coordinates.md"): 42 __doc__ += "\n<br>\n" 43 __doc__ += (Path(__file__).parent / "doc" / filename).read_text()
```python
class Client:
    """
    A client for the Overpass API.

    Requests are rate-limited according to the configured number of slots per IP for the specified
    API server. By default, queries are retried whenever the server is too busy, or the rate limit
    was exceeded. Custom query runners can be used to implement your own retry strategy.

    Args:
        url: The url of an Overpass API instance. Defaults to the main Overpass API instance.
        user_agent: A string used for the User-Agent header. It is good practice to provide a string
                    that identifies your application, and includes a way to contact you (f.e. an
                    e-mail, or a link to a repository). This is important if you make too many
                    requests, or queries that require a lot of resources.
        concurrency: The maximum number of simultaneous connections. In practice the amount
                     of concurrent queries may be limited by the number of slots it provides for
                     each IP.
        status_timeout_secs: If set, status requests to the Overpass API will time out after
                             this duration in seconds. Defaults to no timeout.
        runner: You can provide another query runner if you want to implement your own retry
                strategy.

    References:
        - https://wiki.openstreetmap.org/wiki/Overpass_API#Public_Overpass_API_instances
    """

    __slots__ = (
        "_concurrency",
        "_maybe_session",
        "_runner",
        "_status_timeout_secs",
        "_url",
        "_user_agent",
    )

    def __init__(
        self,
        url: str = DEFAULT_INSTANCE,
        user_agent: str = DEFAULT_USER_AGENT,
        concurrency: int = 32,
        status_timeout_secs: float | None = None,
        runner: QueryRunner | None = None,
    ) -> None:
        if concurrency <= 0:
            msg = "'concurrency' must be > 0"
            raise ValueError(msg)
        if status_timeout_secs is not None and (
            not math.isfinite(status_timeout_secs) or status_timeout_secs <= 0.0
        ):
            msg = "'status_timeout_secs' must be finite > 0"
            raise ValueError(msg)

        self._url: Final[str] = url
        self._user_agent: Final[str] = user_agent
        self._concurrency: Final[int] = concurrency
        self._status_timeout_secs: Final[float | None] = status_timeout_secs
        self._runner: Final[QueryRunner] = runner or DefaultQueryRunner()

        self._maybe_session: aiohttp.ClientSession | None = None

    def _session(self) -> aiohttp.ClientSession:
        """The session used for all requests of this client."""
        if not self._maybe_session or self._maybe_session.closed:
            headers = {"User-Agent": self._user_agent}
            connector = aiohttp.TCPConnector(limit=self._concurrency)
            self._maybe_session = aiohttp.ClientSession(headers=headers, connector=connector)

        return self._maybe_session

    async def close(self) -> None:
        """Cancel all running queries and close the underlying session."""
        if self._maybe_session and not self._maybe_session.closed:
            # do not care if this fails
            with suppress(CallError):
                _ = await self.cancel_queries()

            # is raised when there are still active queries. that's ok
            with suppress(aiohttp.ServerDisconnectedError):
                await self._maybe_session.close()

    async def _status(self, timeout: ClientTimeout | None = None) -> "Status":
        endpoint = urljoin(self._url, "status")
        timeout = timeout or aiohttp.ClientTimeout(total=self._status_timeout_secs)
        async with (
            _map_request_error(timeout),
            self._session().get(url=endpoint, timeout=timeout) as response,
        ):
            return await _parse_status(response)

    async def status(self) -> Status:
        """
        Check the current API status.

        The timeout of this request is configured with the ``status_timeout_secs`` argument.

        Raises:
            ClientError: if the status could not be looked up
        """
        return await self._status()

    async def cancel_queries(self, timeout_secs: float | None = None) -> int:
        """
        Cancel all running queries.

        This can be used to terminate runaway queries that prevent you from sending new ones.

        Returns:
            the number of terminated queries

        Raises:
            ClientError: if the request to cancel queries failed
        """
        if timeout_secs is not None and (not math.isfinite(timeout_secs) or timeout_secs <= 0.0):
            msg = "'timeout_secs' must be finite > 0"
            raise ValueError(msg)

        timeout = aiohttp.ClientTimeout(total=timeout_secs) if timeout_secs else None
        headers = {"User-Agent": self._user_agent}
        endpoint = urljoin(self._url, "kill_my_queries")

        # use a new session here to get around our concurrency limit
        async with (
            aiohttp.ClientSession(headers=headers) as session,
            _map_request_error(timeout),
            session.get(endpoint, timeout=timeout) as response,
        ):
            body = await response.text()
            killed_pids = re.findall("\\(pid (\\d+)\\)", body)
            return len(set(killed_pids))

    async def run_query(self, query: Query, *, raise_on_failure: bool = True) -> None:
        """
        Send a query to the API, and await its completion.

        "Running" the query entails acquiring a connection from the pool, the query requests
        themselves (which may be retried), status requests when the server is busy,
        and cooldown periods.

        The query runner is invoked before every try, and once after the last try.

        To run multiple queries concurrently, wrap the returned coroutines in an ``asyncio`` task,
        f.e. with ``asyncio.create_task()`` and subsequent ``asyncio.gather()``.

        Args:
            query: the query to run on this API instance
            raise_on_failure: if ``True``, raises ``query.error`` if the query failed

        Raises:
            ClientError: when query or status requests fail. If the query was retried, the error
                         of the last try will be raised. The same exception is also captured in
                         ``query.error``. Raising can be prevented by setting ``raise_on_failure``
                         to ``False``.
            RunnerError: when a call to the query runner raises. This exception is raised
                         even if ``raise_on_failure`` is ``False``, since it is likely an error
                         that is not just specific to this query.
            AlreadyRunningError: when another ``run_query()`` call on this query has not finished
                                 yet. This is not affected by ``raise_on_failure``.
        """
        if query.done:
            return  # nothing to do

        if not query._run_lock.acquire(blocking=False):
            raise AlreadyRunningError(kwargs=query.kwargs)

        try:
            if query.nb_tries > 0:
                query.reset()  # reset failed queries

            # query runner is invoked before every try, and once after the last try
            while True:
                await self._invoke_runner(query, raise_on_failure=raise_on_failure)
                if query.done:
                    return
                await self._try_query_once(query)
        finally:
            query._run_lock.release()

    async def _invoke_runner(self, query: Query, *, raise_on_failure: bool) -> None:
        """
        Invoke the query runner.

        Raises:
            ClientError: if the runner raises ``query.error``
            ValueError: if the runner raises a different ``ClientError`` than ``query.error``
            RunnerError: if the runner raises any other exception (which it shouldn't)
        """
        try:
            await self._runner(query)
        except ClientError as err:
            if err is not query.error:
                msg = "query runner raised a ClientError other than 'query.error'"
                raise ValueError(msg) from err
            if raise_on_failure:
                raise
        except AssertionError:
            raise
        except BaseException as err:
            raise RunnerError(cause=err) from err

    async def _try_query_once(self, query: Query) -> None:
        """A single iteration of running a query."""
        query._begin_try()

        try:
            await self._cooldown(query)

            req_timeout = _next_query_req_timeout(query)

            # pick the timeout we will use for the next try
            # TODO: not sure if this should also update the timeout setting in the Query state;
            #       for now, pass it as parameter to the _code() function
            next_timeout_secs = _next_timeout_secs(query)

            data = query._code(next_timeout_secs)

            query._begin_request()

            query.logger.info(f"call api for {query}")

            async with (
                _map_request_error(req_timeout),
                self._session().post(
                    url=urljoin(self._url, "interpreter"),
                    data=data,
                    timeout=req_timeout,
                ) as response,
            ):
                query._succeed_try(
                    response=await _result_or_raise(response, query.kwargs, query.logger),
                    response_bytes=response.content.total_bytes,
                )

        except CallTimeoutError as err:
            fail_with: ClientError = err
            if query.run_timeout_elapsed:
                assert query.run_duration_secs is not None
                fail_with = GiveupError(
                    cause=GiveupCause.RUN_TIMEOUT_DURING_QUERY_CALL,
                    kwargs=query.kwargs,
                    after_secs=query.run_duration_secs,
                )
            query._fail_try(fail_with)

        except ClientError as err:
            query._fail_try(err)

        finally:
            query._end_try()

    async def _cooldown(self, query: Query) -> None:
        """
        If the given query failed with ``TOO_MANY_QUERIES``, check for a cooldown period.

        Raises:
            ClientError: if the status request to find out the cooldown period fails
            GiveupError: if the cooldown is longer than the remaining run duration
        """
        logger = query.logger

        if not is_too_many_queries(query.error):
            return

        # If this client is running too many queries, we can check the status for a
        # cooldown period. This request failing is a bit of an edge case.
        # 'query.error' will be overwritten, which means we will not check for a
        # cooldown in the next iteration.
        status = await self._status(timeout=self._next_status_req_timeout(query))

        if not status.cooldown_secs:
            return

        run_duration = query.run_duration_secs
        assert run_duration is not None

        if run_timeout := query.run_timeout_secs:
            remaining = run_timeout - run_duration

            if status.cooldown_secs > remaining:
                logger.error(f"give up on {query} due to {status.cooldown_secs:.1f}s cooldown")
                raise GiveupError(
                    cause=GiveupCause.RUN_TIMEOUT_BY_COOLDOWN,
                    kwargs=query.kwargs,
                    after_secs=run_duration,
                )

        logger.info(f"{query} has cooldown for {status.cooldown_secs:.1f}s")
        await sleep(status.cooldown_secs)

    def _next_status_req_timeout(self, query: Query) -> aiohttp.ClientTimeout:
        """Status request timeout; possibly limited by either the run or status timeout settings."""
        remaining = None

        run_duration = query.run_duration_secs
        assert run_duration is not None

        if run_timeout := query.run_timeout_secs:
            remaining = run_timeout - run_duration

            if remaining <= 0.0:
                raise GiveupError(
                    cause=GiveupCause.RUN_TIMEOUT_BEFORE_STATUS_CALL,
                    kwargs=query.kwargs,
                    after_secs=run_duration,
                )

        if self._status_timeout_secs:
            remaining = min(remaining, self._status_timeout_secs)  # cap timeout if configured

        return aiohttp.ClientTimeout(total=remaining)
```
A client for the Overpass API.
Requests are rate-limited according to the configured number of slots per IP for the specified API server. By default, queries are retried whenever the server is too busy, or the rate limit was exceeded. Custom query runners can be used to implement your own retry strategy.
Arguments:
- url: The URL of an Overpass API instance. Defaults to the main Overpass API instance.
- user_agent: A string used for the User-Agent header. It is good practice to provide a string that identifies your application, and includes a way to contact you (e.g. an e-mail address, or a link to a repository). This is important if you make too many requests, or queries that require a lot of resources.
- concurrency: The maximum number of simultaneous connections. In practice the number of concurrent queries may be limited by the number of slots the server provides for each IP.
- status_timeout_secs: If set, status requests to the Overpass API will time out after this duration in seconds. Defaults to no timeout.
- runner: You can provide another query runner if you want to implement your own retry strategy.
References:
- https://wiki.openstreetmap.org/wiki/Overpass_API#Public_Overpass_API_instances
Cancel all running queries and close the underlying session.
Check the current API status.
The timeout of this request is configured with the `status_timeout_secs` argument.

Raises:
- ClientError: if the status could not be looked up
Cancel all running queries.
This can be used to terminate runaway queries that prevent you from sending new ones.
Returns:
the number of terminated queries
Raises:
- ClientError: if the request to cancel queries failed
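`cancel_queries()` counts the terminated queries by extracting `(pid …)` markers from the `kill_my_queries` response body. A minimal sketch of that parsing, using a made-up response body:

```python
import re

# made-up response body in the shape the parsing expects
body = "Killing query (pid 1234)\nKilling query (pid 5678)\nKilling query (pid 1234)\n"

# same pattern as in cancel_queries(): digits inside "(pid ...)"
killed_pids = re.findall(r"\(pid (\d+)\)", body)

# duplicates are collapsed, so each query counts once
print(len(set(killed_pids)))  # 2
```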
Send a query to the API, and await its completion.
"Running" the query entails acquiring a connection from the pool, the query requests themselves (which may be retried), status requests when the server is busy, and cooldown periods.
The query runner is invoked before every try, and once after the last try.
To run multiple queries concurrently, wrap the returned coroutines in an `asyncio` task, e.g. with `asyncio.create_task()` and subsequent `asyncio.gather()`.

Arguments:
- query: the query to run on this API instance
- raise_on_failure: if `True`, raises `query.error` if the query failed

Raises:
- ClientError: when query or status requests fail. If the query was retried, the error of the last try will be raised. The same exception is also captured in `query.error`. Raising can be prevented by setting `raise_on_failure` to `False`.
- RunnerError: when a call to the query runner raises. This exception is raised even if `raise_on_failure` is `False`, since it is likely an error that is not just specific to this query.
- AlreadyRunningError: when another `run_query()` call on this query has not finished yet. This is not affected by `raise_on_failure`.
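Running queries concurrently follows the usual `asyncio` pattern. The sketch below uses a placeholder coroutine in place of `client.run_query()` so it stays self-contained:

```python
import asyncio


async def run_query_stub(name: str) -> str:
    """Placeholder standing in for ``client.run_query(query)``."""
    await asyncio.sleep(0)  # pretend to wait on the API
    return name


async def main() -> list[str]:
    # wrap each coroutine in a task so they run concurrently,
    # then gather the results in order
    tasks = [asyncio.create_task(run_query_stub(n)) for n in ("a", "b", "c")]
    return await asyncio.gather(*tasks)


print(asyncio.run(main()))  # ['a', 'b', 'c']
```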
```python
class ClientError(Exception):
    """Base exception for failed Overpass API requests and queries."""

    @property
    def should_retry(self) -> bool:
        """Returns ``True`` if it's worth retrying when encountering this error."""
        return False
```
Base exception for failed Overpass API requests and queries.
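The `should_retry` flag lends itself to a simple retry loop. The sketch below defines stand-in error classes rather than importing the library, so the control flow stays self-contained; the retry helper is made up for illustration:

```python
class ClientError(Exception):
    """Stand-in for the library's base error class."""

    @property
    def should_retry(self) -> bool:
        return False


class BusyError(ClientError):
    """Made-up subclass marking an error that is worth retrying."""

    @property
    def should_retry(self) -> bool:
        return True


def run_with_retries(attempt, max_tries: int = 3):
    """Retry ``attempt()`` while it raises retryable client errors."""
    for try_nb in range(1, max_tries + 1):
        try:
            return attempt()
        except ClientError as err:
            if not err.should_retry or try_nb == max_tries:
                raise


calls = []


def flaky():
    """Fails once with a retryable error, then succeeds."""
    calls.append(None)
    if len(calls) < 2:
        raise BusyError("server too busy")
    return "ok"


result = run_with_retries(flaky)
print(result)  # ok
```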
```python
class Query:
    """
    State of a query that is either pending, running, successful, or failed.

    Args:
        input_code: The input Overpass QL code. Note that some settings might be changed
                    by query runners, notably the 'timeout' and 'maxsize' settings.
        logger: The logger to use for all logging output related to this query.
        **kwargs: Additional keyword arguments that can be used to identify queries.

    References:
        - https://wiki.openstreetmap.org/wiki/Overpass_API/Overpass_QL
    """

    __slots__ = (
        "_error",
        "_input_code",
        "_kwargs",
        "_logger",
        "_max_timed_out_after_secs",
        "_nb_tries",
        "_request_timeout",
        "_response",
        "_response_bytes",
        "_run_lock",
        "_run_timeout_secs",
        "_settings",
        "_time_end_try",
        "_time_start",
        "_time_start_req",
        "_time_start_try",
    )

    def __init__(
        self,
        input_code: str,
        logger: logging.Logger = _NULL_LOGGER,
        **kwargs: Any,  # noqa: ANN401
    ) -> None:
        self._run_lock: Final[threading.Lock] = threading.Lock()
        """a lock used to ensure a query cannot be run more than once at the same time"""

        self._input_code: Final[str] = input_code
        """the original given overpass ql code"""

        self._logger: Final[logging.Logger] = logger
        """logger to use for this query"""

        self._kwargs: Final[dict] = kwargs
        """used to identify this query"""

        self._settings = dict(_SETTING_PATTERN.findall(input_code))
        """all overpass ql settings [k:v];"""

        if "out" in self._settings and self._settings["out"] != "json":
            msg = "the '[out:*]' setting is implicitly set to 'json' and should be omitted"
            raise ValueError(msg)

        self._settings["out"] = "json"

        if "maxsize" not in self._settings:
            self._settings["maxsize"] = DEFAULT_MAXSIZE_MIB * 1024 * 1024
        elif not self._settings["maxsize"].isdigit() or int(self._settings["maxsize"]) <= 0:
            msg = "the '[maxsize:*]' setting must be an integer > 0"
            raise ValueError(msg)

        if "timeout" not in self._settings:
            self._settings["timeout"] = DEFAULT_TIMEOUT_SECS
        elif not self._settings["timeout"].isdigit() or int(self._settings["timeout"]) <= 0:
            msg = "the '[timeout:*]' setting must be an integer > 0"
            raise ValueError(msg)

        self._run_timeout_secs: float | None = None
        """total time limit for running this query"""

        self._request_timeout: RequestTimeout = RequestTimeout()
        """config for request timeouts"""

        self._error: ClientError | None = None
        """error of the last try, or None"""

        self._response: dict | None = None
        """response JSON as a dict, or None"""

        self._response_bytes = 0.0
        """number of bytes in a response, or zero"""

        self._nb_tries = 0
        """number of tries so far, starting at zero"""

        self._time_start: Instant | None = None
        """time prior to executing the first try"""

        self._time_start_try: Instant | None = None
        """time prior to executing the most recent try"""

        self._time_start_req: Instant | None = None
        """time prior to executing the most recent try's query request"""

        self._time_end_try: Instant | None = None
        """time the most recent try finished"""

        self._max_timed_out_after_secs: int | None = None
        """maximum of seconds after which the query was cancelled"""

    def reset(self) -> None:
        """Reset the query to its initial state, ignoring previous tries."""
        Query.__init__(
            self,
            input_code=self._input_code,
            logger=self._logger,
            **self._kwargs,
        )

    @property
    def input_code(self) -> str:
        """The original input Overpass QL source code."""
        return self._input_code

    @property
    def kwargs(self) -> dict:
        """
        Keyword arguments that can be used to identify queries.

        The default query runner will log these values when a query is run.
        """
        return self._kwargs

    @property
    def logger(self) -> logging.Logger:
        """The logger used for logging output related to this query."""
        return self._logger

    @property
    def nb_tries(self) -> int:
        """Current number of tries."""
        return self._nb_tries

    @property
    def error(self) -> ClientError | None:
        """
        Error of the most recent try.

        Returns:
            an error or ``None`` if the query wasn't tried or hasn't failed
        """
        return self._error

    @property
    def response(self) -> dict | None:
        """
        The entire JSON response of the query.

        Returns:
            the response, or ``None`` if the query has not successfully finished (yet)
        """
        return self._response

    @property
    def was_cached(self) -> bool | None:
        """
        Indicates whether the query result was cached.

        Returns:
            ``None`` if the query has not been run yet.
            ``True`` if the query has a result set with zero tries.
            ``False`` otherwise.
        """
        if self._response is None:
            return None
        return self._nb_tries == 0

    @property
    def result_set(self) -> list[dict] | None:
        """
        The result set of the query.

        This is open data, licensed under the Open Data Commons Open Database License (ODbL).
        You are free to copy, distribute, transmit and adapt this data, as long as you credit
        OpenStreetMap and its contributors. If you alter or build upon this data, you may
        distribute the result only under the same licence.

        Returns:
            the elements of the result set, or ``None`` if the query has not successfully
            finished (yet)

        References:
            - https://www.openstreetmap.org/copyright
            - https://opendatacommons.org/licenses/odbl/1-0/
        """
        if not self._response:
            return None
        return self._response["elements"]

    @property
    def response_size_mib(self) -> float | None:
        """
        The size of the response in mebibytes.

        Returns:
            the size, or ``None`` if the query has not successfully finished (yet)
        """
        if self._response is None:
            return None
        return self._response_bytes / 1024.0 / 1024.0

    @property
    def maxsize_mib(self) -> float:
        """
        The current value of the [maxsize:*] setting in mebibytes.

        This size indicates the maximum allowed memory for the query in bytes RAM on the server,
        as expected by the user. If the query needs more RAM than this value, the server may abort
        the query with a memory exhaustion. The higher this size, the more probably the server
        rejects the query before executing it.
        """
        return float(self._settings["maxsize"]) // 1024.0 // 1024.0

    @maxsize_mib.setter
    def maxsize_mib(self, value: float) -> None:
        if not math.isfinite(value) or value <= 0.0:
            msg = "'maxsize_mib' must be finite > 0"
            raise ValueError(msg)
        self._settings["maxsize"] = int(value * 1024.0 * 1024.0)

    @property
    def timeout_secs(self) -> int:
        """
        The current value of the [timeout:*] setting in seconds.

        This duration is the maximum allowed runtime for the query in seconds, as expected by the
        user. If the query runs longer than this time, the server may abort the query. The higher
        this duration, the more probably the server rejects the query before executing it.
        """
        return int(self._settings["timeout"])

    @timeout_secs.setter
    def timeout_secs(self, value: int) -> None:
        if value < 1:
            msg = "timeout_secs must be >= 1"
            raise ValueError(msg)
        self._settings["timeout"] = value

    @property
    def run_timeout_secs(self) -> float | None:
        """
        A limit to ``run_duration_secs``, that cancels running the query when exceeded.

        Defaults to no timeout.

        The client will raise a ``GiveupError`` if the timeout is reached.

        Not to be confused with ``timeout_secs``, which is a setting for the Overpass API instance,
        that limits a single query execution time. Instead, this value can be used to limit the
        total client-side time spent on this query (see ``Client.run_query``).
        """
        return self._run_timeout_secs

    @run_timeout_secs.setter
    def run_timeout_secs(self, value: float | None) -> None:
        if value is not None and (not math.isfinite(value) or value <= 0.0):
            msg = "'run_timeout_secs' must be finite > 0"
            raise ValueError(msg)
        self._run_timeout_secs = value

    @property
    def run_timeout_elapsed(self) -> bool:
        """Returns ``True`` if ``run_timeout_secs`` is set and has elapsed."""
        return (
            self.run_timeout_secs is not None
            and self.run_duration_secs is not None
            and self.run_timeout_secs < self.run_duration_secs
        )

    @property
    def request_timeout(self) -> "RequestTimeout":
        """Request timeout settings for this query."""
        return self._request_timeout

    @request_timeout.setter
    def request_timeout(self, value: "RequestTimeout") -> None:
        self._request_timeout = value

    def _code(self, next_timeout_secs_used: int) -> str:
        """The query's QL code, substituting the [timeout:*] setting with the given duration."""
        settings_copy = self._settings.copy()
        settings_copy["timeout"] = next_timeout_secs_used

        # remove the original settings statement
        code = _SETTING_PATTERN.sub("", self._input_code)

        # put the adjusted settings in front
        settings = "".join((f"[{k}:{v}]" for k, v in settings_copy.items())) + ";"

        return f"{settings}\n{code}"

    @property
    def cache_key(self) -> str:
        """
        Hash QL code, and return its digest as hexadecimal string.

        The default query runner uses this as cache key.
        """
        # Remove the original settings statement
        code = _SETTING_PATTERN.sub("", self._input_code)
        hasher = hashlib.blake2b(digest_size=8)
        hasher.update(code.encode("utf-8"))
        return hasher.hexdigest()

    @property
    def done(self) -> bool:
        """Returns ``True`` if the result set was received."""
        return self._response is not None

    @property
    def request_duration_secs(self) -> float | None:
        """
        How long it took to fetch the result set in seconds.

        This is the duration starting with the API request, and ending once
        the result is written to this query object. Although it depends on how busy
        the API instance is, this can give some indication of how long a query takes.

        Returns:
            the duration or ``None`` if there is no result set yet, or when it was cached.
        """
        if self._response is None or self.was_cached:
            return None

        assert self._time_end_try is not None
        assert self._time_start_req is not None

        return self._time_end_try - self._time_start_req

    @property
    def run_duration_secs(self) -> float | None:
        """
        The total required time for this query in seconds (so far).

        Returns:
            the duration or ``None`` if there is no result set yet, or when it was cached.
        """
        if self._time_start is None:
            return None

        if self._time_end_try:
            return self._time_end_try - self._time_start

        return self._time_start.elapsed_secs_since

    @property
    def _run_duration_left_secs(self) -> float | None:
        """If a limit was set, returns the seconds until the time to run the query has elapsed."""
        if (time_max := self.run_timeout_secs) and (time_so_far := self.run_duration_secs):
            return max(0, math.ceil(time_max - time_so_far))
        return None

    @property
    def api_version(self) -> str | None:
        """
        The Overpass API version used by the queried instance.

        Returns:
            f.e.
```
``"Overpass API 0.7.56.8 7d656e78"``, or ``None`` if the query 424 has not successfully finished (yet) 425 426 References: 427 - https://wiki.openstreetmap.org/wiki/Overpass_API/versions 428 """ 429 if self._response is None: 430 return None 431 432 return self._response["generator"] 433 434 @property 435 def timestamp_osm(self) -> datetime | None: 436 """ 437 All OSM edits that have been uploaded before this date are included. 438 439 It can take a couple of minutes for changes to the database to show up in the 440 Overpass API query results. 441 442 Returns: 443 the timestamp, or ``None`` if the query has not successfully finished (yet) 444 """ 445 if self._response is None: 446 return None 447 448 date_str = self._response["osm3s"]["timestamp_osm_base"] 449 return datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ").astimezone(UTC) 450 451 @property 452 def timestamp_areas(self) -> datetime | None: 453 """ 454 All area data edits that have been uploaded before this date are included. 455 456 If the query involves area data processing, this is the date of the latest edit 457 that has been considered in the most recent batch run of the area generation. 458 459 Returns: 460 the timestamp, or ``None`` if the query has not successfully finished (yet), or 461 if it does not involve area data processing. 
462 """ 463 if self._response is None: 464 return None 465 466 date_str = self._response["osm3s"].get("timestamp_areas_base") 467 if not date_str: 468 return None 469 470 return datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ").astimezone(UTC) 471 472 @property 473 def copyright(self) -> str: 474 """A copyright notice that comes with the result set.""" 475 if self._response is None: 476 return _COPYRIGHT 477 478 return self._response["osm3s"].get("copyright") or _COPYRIGHT 479 480 def __str__(self) -> str: 481 query = f"query{self.kwargs!r}" 482 483 size = self.response_size_mib 484 time_request = self.request_duration_secs 485 time_total = self.run_duration_secs 486 487 if self.nb_tries == 0: 488 details = "pending" 489 elif self.done: 490 if self.nb_tries == 1: 491 details = f"done - {size:.01f}mb in {time_request:.01f}s" 492 else: 493 details = f"done after {time_total:.01f}s - {size:.01f}mb in {time_request:.01f}s" 494 else: 495 t = "try" if self.nb_tries == 1 else "tries" 496 details = f"failing after {self.nb_tries} {t}, {time_total:.01f}s" 497 498 return f"{query} ({details})" 499 500 def __repr__(self) -> str: 501 cls_name = type(self).__name__ 502 503 details = { 504 "kwargs": self._kwargs, 505 "done": self.done, 506 } 507 508 if self.nb_tries == 0 or self.error: 509 details["tries"] = self.nb_tries 510 511 if self.error: 512 details["error"] = type(self.error).__name__ 513 514 if self.done: 515 details["response_size"] = f"{self.response_size_mib:.02f}mb" 516 517 if not self.was_cached: 518 details["request_duration"] = f"{self.request_duration_secs:.02f}s" 519 520 if self.nb_tries > 0: 521 details["run_duration"] = f"{self.run_duration_secs:.02f}s" 522 523 details_str = ", ".join((f"{k}={v!r}" for k, v in details.items())) 524 525 return f"{cls_name}({details_str})" 526 527 def _begin_try(self) -> None: 528 """First thing to call when starting the next try, after invoking the query runner.""" 529 if self._time_start is None: 530 self._time_start = 
Instant.now() 531 532 self._time_start_try = Instant.now() 533 self._time_start_req = None 534 self._time_end_try = None 535 536 def _begin_request(self) -> None: 537 """Call before making the API call of a try, after waiting for cooldown.""" 538 self._time_start_req = Instant.now() 539 540 def _succeed_try(self, response: dict, response_bytes: int) -> None: 541 """Call when the API call of a try was successful.""" 542 self._time_end_try = Instant.now() 543 self._response = response 544 self._response_bytes = response_bytes 545 self._error = None 546 547 def _fail_try(self, err: ClientError) -> None: 548 """Call when the API call of a try failed.""" 549 self._error = err 550 551 if is_exceeding_timeout(err): 552 self._max_timed_out_after_secs = err.timed_out_after_secs 553 554 def _end_try(self) -> None: 555 """Final call in a try.""" 556 self._nb_tries += 1
State of a query that is either pending, running, successful, or failed.
Arguments:
- input_code: The input Overpass QL code. Note that some settings might be changed by query runners, notably the 'timeout' and 'maxsize' settings.
- logger: The logger to use for all logging output related to this query.
- **kwargs: Additional keyword arguments that can be used to identify queries.
```python
    def __init__(
        self,
        input_code: str,
        logger: logging.Logger = _NULL_LOGGER,
        **kwargs: Any,  # noqa: ANN401
    ) -> None:
        self._run_lock: Final[threading.Lock] = threading.Lock()
        """a lock used to ensure a query cannot be run more than once at the same time"""

        self._input_code: Final[str] = input_code
        """the original given overpass ql code"""

        self._logger: Final[logging.Logger] = logger
        """logger to use for this query"""

        self._kwargs: Final[dict] = kwargs
        """used to identify this query"""

        self._settings = dict(_SETTING_PATTERN.findall(input_code))
        """all overpass ql settings [k:v];"""

        if "out" in self._settings and self._settings["out"] != "json":
            msg = "the '[out:*]' setting is implicitly set to 'json' and should be omitted"
            raise ValueError(msg)

        self._settings["out"] = "json"

        if "maxsize" not in self._settings:
            self._settings["maxsize"] = DEFAULT_MAXSIZE_MIB * 1024 * 1024
        elif not self._settings["maxsize"].isdigit() or int(self._settings["maxsize"]) <= 0:
            msg = "the '[maxsize:*]' setting must be an integer > 0"
            raise ValueError(msg)

        if "timeout" not in self._settings:
            self._settings["timeout"] = DEFAULT_TIMEOUT_SECS
        elif not self._settings["timeout"].isdigit() or int(self._settings["timeout"]) <= 0:
            msg = "the '[timeout:*]' setting must be an integer > 0"
            raise ValueError(msg)

        self._run_timeout_secs: float | None = None
        """total time limit for running this query"""

        self._request_timeout: RequestTimeout = RequestTimeout()
        """config for request timeouts"""

        self._error: ClientError | None = None
        """error of the last try, or None"""

        self._response: dict | None = None
        """response JSON as a dict, or None"""

        self._response_bytes = 0.0
        """number of bytes in a response, or zero"""

        self._nb_tries = 0
        """number of tries so far, starting at zero"""

        self._time_start: Instant | None = None
        """time prior to executing the first try"""

        self._time_start_try: Instant | None = None
        """time prior to executing the most recent try"""

        self._time_start_req: Instant | None = None
        """time prior to executing the most recent try's query request"""

        self._time_end_try: Instant | None = None
        """time the most recent try finished"""

        self._max_timed_out_after_secs: int | None = None
        """maximum of seconds after which the query was cancelled"""
```
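The settings handling above can be sketched in isolation. Note that `_SETTING_PATTERN` is not shown in this excerpt, so the regex below is a hypothetical stand-in that captures `[key:value]` pairs from a QL settings statement:

```python
import re

# hypothetical stand-in for _SETTING_PATTERN (not shown in this excerpt):
# captures the key and value of each "[key:value]" setting
SETTING_PATTERN = re.compile(r"\[(\w+?):(.+?)\]")

code = "[timeout:90][maxsize:536870912];\nnode(5369192667); out;"

# same shape as `self._settings = dict(_SETTING_PATTERN.findall(input_code))`
settings = dict(SETTING_PATTERN.findall(code))
assert settings == {"timeout": "90", "maxsize": "536870912"}
```

The constructor then forces `[out:json]` and fills in `maxsize`/`timeout` defaults when those settings are absent.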
```python
    def reset(self) -> None:
        """Reset the query to its initial state, ignoring previous tries."""
        Query.__init__(
            self,
            input_code=self._input_code,
            logger=self._logger,
            **self._kwargs,
        )
```
Reset the query to its initial state, ignoring previous tries.
```python
    @property
    def input_code(self) -> str:
        """The original input Overpass QL source code."""
        return self._input_code
```
The original input Overpass QL source code.
```python
    @property
    def kwargs(self) -> dict:
        """
        Keyword arguments that can be used to identify queries.

        The default query runner will log these values when a query is run.
        """
        return self._kwargs
```
Keyword arguments that can be used to identify queries.
The default query runner will log these values when a query is run.
```python
    @property
    def logger(self) -> logging.Logger:
        """The logger used for logging output related to this query."""
        return self._logger
```
The logger used for logging output related to this query.
```python
    @property
    def nb_tries(self) -> int:
        """Current number of tries."""
        return self._nb_tries
```
Current number of tries.
```python
    @property
    def error(self) -> ClientError | None:
        """
        Error of the most recent try.

        Returns:
            an error or ``None`` if the query wasn't tried or hasn't failed
        """
        return self._error
```
Error of the most recent try.
Returns:
an error, or `None` if the query wasn't tried or hasn't failed
```python
    @property
    def response(self) -> dict | None:
        """
        The entire JSON response of the query.

        Returns:
            the response, or ``None`` if the query has not successfully finished (yet)
        """
        return self._response
```
The entire JSON response of the query.
Returns:
the response, or `None` if the query has not successfully finished (yet)
```python
    @property
    def was_cached(self) -> bool | None:
        """
        Indicates whether the query result was cached.

        Returns:
            ``None`` if the query has not been run yet.
            ``True`` if the query has a result set with zero tries.
            ``False`` otherwise.
        """
        if self._response is None:
            return None
        return self._nb_tries == 0
```
Indicates whether the query result was cached.
Returns:
`None` if the query has not been run yet,
`True` if the query has a result set with zero tries,
`False` otherwise.
```python
    @property
    def result_set(self) -> list[dict] | None:
        """
        The result set of the query.

        This is open data, licensed under the Open Data Commons Open Database License (ODbL).
        You are free to copy, distribute, transmit and adapt this data, as long as you credit
        OpenStreetMap and its contributors. If you alter or build upon this data, you may
        distribute the result only under the same licence.

        Returns:
            the elements of the result set, or ``None`` if the query has not successfully
            finished (yet)

        References:
            - https://www.openstreetmap.org/copyright
            - https://opendatacommons.org/licenses/odbl/1-0/
        """
        if not self._response:
            return None
        return self._response["elements"]
```
The result set of the query.
This is open data, licensed under the Open Data Commons Open Database License (ODbL). You are free to copy, distribute, transmit and adapt this data, as long as you credit OpenStreetMap and its contributors. If you alter or build upon this data, you may distribute the result only under the same licence.
Returns:
the elements of the result set, or `None` if the query has not successfully finished (yet)

References:
- https://www.openstreetmap.org/copyright
- https://opendatacommons.org/licenses/odbl/1-0/
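For illustration, here is how the property slices a raw response. The dict below is a trimmed, made-up sample in the shape the Overpass API returns, where `"elements"` holds the result set:

```python
# a trimmed sample response (values are illustrative)
response = {
    "generator": "Overpass API 0.7.56.8 7d656e78",
    "osm3s": {"timestamp_osm_base": "2024-05-01T12:00:00Z"},
    "elements": [
        {"type": "way", "id": 24981342, "tags": {"addr:housename": "Elbphilharmonie"}},
    ],
}

# the equivalent of query.result_set for a finished query
result_set = response["elements"]
assert result_set[0]["tags"]["addr:housename"] == "Elbphilharmonie"
```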
```python
    @property
    def response_size_mib(self) -> float | None:
        """
        The size of the response in mebibytes.

        Returns:
            the size, or ``None`` if the query has not successfully finished (yet)
        """
        if self._response is None:
            return None
        return self._response_bytes / 1024.0 / 1024.0
```
The size of the response in mebibytes.
Returns:
the size, or `None` if the query has not successfully finished (yet)
```python
    @property
    def maxsize_mib(self) -> float:
        """
        The current value of the [maxsize:*] setting in mebibytes.

        This size indicates the maximum allowed memory for the query, in bytes of RAM on the
        server, as expected by the user. If the query needs more RAM than this value, the
        server may abort the query with a memory exhaustion error. The higher this size, the
        more likely the server is to reject the query before executing it.
        """
        # true division: the setting is stored in bytes
        return float(self._settings["maxsize"]) / 1024.0 / 1024.0
```
The current value of the [maxsize:*] setting in mebibytes.
This size indicates the maximum allowed memory for the query, in bytes of RAM on the server, as expected by the user. If the query needs more RAM than this value, the server may abort the query with a memory exhaustion error. The higher this size, the more likely the server is to reject the query before executing it.
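The conversion between this property and the byte value stored for the `[maxsize:*]` setting is plain MiB arithmetic:

```python
MIB = 1024 * 1024  # one mebibyte in bytes

# what the setter stores for `query.maxsize_mib = 512`
stored = int(512 * MIB)
assert stored == 536870912

# what the getter computes back from the stored byte value
assert stored / MIB == 512.0
```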
```python
    @property
    def timeout_secs(self) -> int:
        """
        The current value of the [timeout:*] setting in seconds.

        This duration is the maximum allowed runtime for the query in seconds, as expected by
        the user. If the query runs longer than this time, the server may abort the query. The
        higher this duration, the more likely the server is to reject the query before
        executing it.
        """
        return int(self._settings["timeout"])
```
The current value of the [timeout:*] setting in seconds.
This duration is the maximum allowed runtime for the query in seconds, as expected by the user. If the query runs longer than this time, the server may abort the query. The higher this duration, the more likely the server is to reject the query before executing it.
```python
    @property
    def run_timeout_secs(self) -> float | None:
        """
        A limit to ``run_duration_secs``, that cancels running the query when exceeded.

        Defaults to no timeout.

        The client will raise a ``GiveupError`` if the timeout is reached.

        Not to be confused with ``timeout_secs``, which is a setting for the Overpass API
        instance, that limits a single query execution time. Instead, this value can be used
        to limit the total client-side time spent on this query (see ``Client.run_query``).
        """
        return self._run_timeout_secs
```
A limit to `run_duration_secs` that cancels the query when exceeded.

Defaults to no timeout.

The client will raise a `GiveupError` if the timeout is reached.

Not to be confused with `timeout_secs`, which is a setting for the Overpass API instance that limits a single query execution time. Instead, this value can be used to limit the total client-side time spent on this query (see `Client.run_query`).
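The distinction is between a server-side limit (`[timeout:*]`) and a client-side one. The idea behind a client-side run timeout can be sketched with `asyncio.wait_for`; this illustrates the concept only and is not the library's implementation (which raises `GiveupError`, not `TimeoutError`):

```python
import asyncio


async def slow_query() -> str:
    # stands in for an API call the server would happily keep serving
    await asyncio.sleep(0.2)
    return "result"


async def main() -> str:
    try:
        # give up on the client after 0.05s, regardless of the server-side [timeout:*]
        return await asyncio.wait_for(slow_query(), timeout=0.05)
    except asyncio.TimeoutError:
        return "gave up"


result = asyncio.run(main())
assert result == "gave up"
```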
```python
    @property
    def run_timeout_elapsed(self) -> bool:
        """Returns ``True`` if ``run_timeout_secs`` is set and has elapsed."""
        return (
            self.run_timeout_secs is not None
            and self.run_duration_secs is not None
            and self.run_timeout_secs < self.run_duration_secs
        )
```
Returns `True` if `run_timeout_secs` is set and has elapsed.
```python
    @property
    def request_timeout(self) -> "RequestTimeout":
        """Request timeout settings for this query."""
        return self._request_timeout
```
Request timeout settings for this query.
```python
    @property
    def cache_key(self) -> str:
        """
        Hash QL code, and return its digest as hexadecimal string.

        The default query runner uses this as cache key.
        """
        # Remove the original settings statement
        code = _SETTING_PATTERN.sub("", self._input_code)
        hasher = hashlib.blake2b(digest_size=8)
        hasher.update(code.encode("utf-8"))
        return hasher.hexdigest()
```
Hash the QL code and return its digest as a hexadecimal string.
The default query runner uses this as cache key.
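The digest scheme is visible in the source above and can be reproduced with the standard library. Note that the real property first strips the settings statement with `_SETTING_PATTERN`, which is omitted here:

```python
import hashlib


def cache_key(code: str) -> str:
    # 8-byte BLAKE2b digest of the QL code, rendered as 16 hex characters
    hasher = hashlib.blake2b(digest_size=8)
    hasher.update(code.encode("utf-8"))
    return hasher.hexdigest()


key = cache_key("node(5369192667); out;")
assert len(key) == 16
assert key == cache_key("node(5369192667); out;")  # deterministic: same code, same key
```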
```python
    @property
    def done(self) -> bool:
        """Returns ``True`` if the result set was received."""
        return self._response is not None
```
Returns `True` if the result set was received.
```python
    @property
    def request_duration_secs(self) -> float | None:
        """
        How long it took to fetch the result set in seconds.

        This is the duration starting with the API request, and ending once
        the result is written to this query object. Although it depends on how busy
        the API instance is, this can give some indication of how long a query takes.

        Returns:
            the duration or ``None`` if there is no result set yet, or when it was cached.
        """
        if self._response is None or self.was_cached:
            return None

        assert self._time_end_try is not None
        assert self._time_start_req is not None

        return self._time_end_try - self._time_start_req
```
How long it took to fetch the result set in seconds.
This is the duration starting with the API request, and ending once the result is written to this query object. Although it depends on how busy the API instance is, this can give some indication of how long a query takes.
Returns:
the duration, or `None` if there is no result set yet, or when it was cached
```python
    @property
    def run_duration_secs(self) -> float | None:
        """
        The total required time for this query in seconds (so far).

        Returns:
            the duration, or ``None`` if the query has not been started yet, or when its
            result was cached.
        """
        if self._time_start is None:
            return None

        if self._time_end_try:
            return self._time_end_try - self._time_start

        return self._time_start.elapsed_secs_since
```
The total required time for this query in seconds (so far).
Returns:
the duration, or `None` if the query has not been started yet, or when its result was cached
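The `Instant` type used for these measurements is not shown in this excerpt; measuring such a duration with the standard library's monotonic clock could look like this sketch:

```python
import time

start = time.monotonic()  # comparable to `Instant.now()` at the first try
time.sleep(0.01)          # stands in for waiting on the query
duration_secs = time.monotonic() - start

assert duration_secs > 0.0
```

A monotonic clock is the right tool here because wall-clock time can jump (NTP adjustments, DST), which would corrupt duration measurements.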
```python
    @property
    def api_version(self) -> str | None:
        """
        The Overpass API version used by the queried instance.

        Returns:
            f.e. ``"Overpass API 0.7.56.8 7d656e78"``, or ``None`` if the query
            has not successfully finished (yet)

        References:
            - https://wiki.openstreetmap.org/wiki/Overpass_API/versions
        """
        if self._response is None:
            return None

        return self._response["generator"]
```
The Overpass API version used by the queried instance.
Returns:
f.e. `"Overpass API 0.7.56.8 7d656e78"`, or `None` if the query has not successfully finished (yet)

References:
- https://wiki.openstreetmap.org/wiki/Overpass_API/versions
```python
    @property
    def timestamp_osm(self) -> datetime | None:
        """
        All OSM edits that have been uploaded before this date are included.

        It can take a couple of minutes for changes to the database to show up in the
        Overpass API query results.

        Returns:
            the timestamp, or ``None`` if the query has not successfully finished (yet)
        """
        if self._response is None:
            return None

        date_str = self._response["osm3s"]["timestamp_osm_base"]
        # the timestamp string is in UTC: attach that zone instead of converting from local time
        return datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=UTC)
```
All OSM edits that have been uploaded before this date are included.
It can take a couple of minutes for changes to the database to show up in the Overpass API query results.
Returns:
the timestamp, or `None` if the query has not successfully finished (yet)
```python
    @property
    def timestamp_areas(self) -> datetime | None:
        """
        All area data edits that have been uploaded before this date are included.

        If the query involves area data processing, this is the date of the latest edit
        that has been considered in the most recent batch run of the area generation.

        Returns:
            the timestamp, or ``None`` if the query has not successfully finished (yet), or
            if it does not involve area data processing.
        """
        if self._response is None:
            return None

        date_str = self._response["osm3s"].get("timestamp_areas_base")
        if not date_str:
            return None

        # the timestamp string is in UTC: attach that zone instead of converting from local time
        return datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=UTC)
```
All area data edits that have been uploaded before this date are included.
If the query involves area data processing, this is the date of the latest edit that has been considered in the most recent batch run of the area generation.
Returns:
the timestamp, or `None` if the query has not successfully finished (yet), or if it does not involve area data processing