Metadata-Version: 2.1
Name: jina
Version: 3.16.1
Summary: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
Home-page: https://github.com/jina-ai/jina/
Download-URL: https://github.com/jina-ai/jina/tags
Author: Jina AI
Author-email: hello@jina.ai
License: Apache 2.0
Project-URL: Documentation, https://docs.jina.ai
Project-URL: Source, https://github.com/jina-ai/jina/
Project-URL: Tracker, https://github.com/jina-ai/jina/issues
Keywords: jina cloud-native cross-modal multimodal neural-search query search index elastic neural-network encoding embedding serving docker container image video audio deep-learning mlops
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Unix Shell
Classifier: Environment :: Console
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Database :: Database Engines/Servers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Multimedia :: Video
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiohttp
Requires-Dist: grpcio (<1.48.1,>=1.46.0)
Requires-Dist: requests
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0)
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0)
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1)
Requires-Dist: docarray (<0.30.0,>=0.16.4)
Requires-Dist: numpy
Requires-Dist: packaging (>=20.0)
Requires-Dist: pyyaml (>=5.3.1)
Requires-Dist: python-multipart
Requires-Dist: docker
Requires-Dist: fastapi (>=0.76.0)
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0)
Requires-Dist: filelock
Requires-Dist: pydantic
Requires-Dist: pathspec
Requires-Dist: opentelemetry-sdk (>=1.14.0)
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0)
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0)
Requires-Dist: aiofiles
Requires-Dist: opentelemetry-api (>=1.12.0)
Requires-Dist: prometheus-client (>=0.12.0)
Requires-Dist: jina-hubble-sdk (>=0.30.4)
Requires-Dist: jcloud (>=0.0.35)
Requires-Dist: uvicorn[standard]
Requires-Dist: urllib3 (<2.0.0)
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0)
Requires-Dist: websockets
Requires-Dist: protobuf (>=3.19.0)
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0)
Requires-Dist: uvloop ; platform_system != "Windows"
Provides-Extra: pillow
Requires-Dist: Pillow ; extra == 'pillow'
Provides-Extra: aiofiles
Requires-Dist: aiofiles ; extra == 'aiofiles'
Provides-Extra: aiohttp
Requires-Dist: aiohttp ; extra == 'aiohttp'
Provides-Extra: all
Requires-Dist: aiohttp ; extra == 'all'
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'all'
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'all'
Requires-Dist: flaky ; extra == 'all'
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: requests ; extra == 'all'
Requires-Dist: Pillow ; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'all'
Requires-Dist: pytest-repeat ; extra == 'all'
Requires-Dist: pytest ; extra == 'all'
Requires-Dist: bs4 ; extra == 'all'
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'all'
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'all'
Requires-Dist: docarray (<0.30.0,>=0.16.4) ; extra == 'all'
Requires-Dist: numpy ; extra == 'all'
Requires-Dist: pytest-asyncio ; extra == 'all'
Requires-Dist: packaging (>=20.0) ; extra == 'all'
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'all'
Requires-Dist: python-multipart ; extra == 'all'
Requires-Dist: docker ; extra == 'all'
Requires-Dist: tensorflow (>=2.0) ; extra == 'all'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'all'
Requires-Dist: filelock ; extra == 'all'
Requires-Dist: pytest-lazy-fixture ; extra == 'all'
Requires-Dist: pytest-custom-exit-code ; extra == 'all'
Requires-Dist: sgqlc ; extra == 'all'
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'all'
Requires-Dist: pytest-timeout ; extra == 'all'
Requires-Dist: pydantic ; extra == 'all'
Requires-Dist: psutil ; extra == 'all'
Requires-Dist: pathspec ; extra == 'all'
Requires-Dist: black (==22.3.0) ; extra == 'all'
Requires-Dist: torch ; extra == 'all'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'all'
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'all'
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: pytest-reraise ; extra == 'all'
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'all'
Requires-Dist: jsonschema ; extra == 'all'
Requires-Dist: scipy (>=1.6.1) ; extra == 'all'
Requires-Dist: coverage (==6.2) ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'all'
Requires-Dist: requests-mock ; extra == 'all'
Requires-Dist: aiofiles ; extra == 'all'
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'all'
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'all'
Requires-Dist: jcloud (>=0.0.35) ; extra == 'all'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'all'
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'all'
Requires-Dist: urllib3 (<2.0.0) ; extra == 'all'
Requires-Dist: uvicorn[standard] ; extra == 'all'
Requires-Dist: pytest-mock ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'all'
Requires-Dist: protobuf (>=3.19.0) ; extra == 'all'
Requires-Dist: websockets ; extra == 'all'
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'all'
Requires-Dist: mock ; extra == 'all'
Requires-Dist: uvloop ; (platform_system != "Windows") and extra == 'all'
Provides-Extra: black
Requires-Dist: black (==22.3.0) ; extra == 'black'
Provides-Extra: bs4
Requires-Dist: bs4 ; extra == 'bs4'
Provides-Extra: cicd
Requires-Dist: sgqlc ; extra == 'cicd'
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'cicd'
Requires-Dist: torch ; extra == 'cicd'
Requires-Dist: tensorflow (>=2.0) ; extra == 'cicd'
Requires-Dist: jsonschema ; extra == 'cicd'
Requires-Dist: bs4 ; extra == 'cicd'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'cicd'
Provides-Extra: core
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'core'
Requires-Dist: docarray (<0.30.0,>=0.16.4) ; extra == 'core'
Requires-Dist: numpy ; extra == 'core'
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'core'
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'core'
Requires-Dist: packaging (>=20.0) ; extra == 'core'
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'core'
Requires-Dist: jcloud (>=0.0.35) ; extra == 'core'
Requires-Dist: urllib3 (<2.0.0) ; extra == 'core'
Requires-Dist: protobuf (>=3.19.0) ; extra == 'core'
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'core'
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'core'
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'core'
Provides-Extra: coverage
Requires-Dist: coverage (==6.2) ; extra == 'coverage'
Provides-Extra: devel
Requires-Dist: aiohttp ; extra == 'devel'
Requires-Dist: requests ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'devel'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'devel'
Requires-Dist: python-multipart ; extra == 'devel'
Requires-Dist: docker ; extra == 'devel'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'devel'
Requires-Dist: filelock ; extra == 'devel'
Requires-Dist: sgqlc ; extra == 'devel'
Requires-Dist: pydantic ; extra == 'devel'
Requires-Dist: pathspec ; extra == 'devel'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'devel'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'devel'
Requires-Dist: aiofiles ; extra == 'devel'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'devel'
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'devel'
Requires-Dist: uvicorn[standard] ; extra == 'devel'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'devel'
Requires-Dist: websockets ; extra == 'devel'
Requires-Dist: uvloop ; (platform_system != "Windows") and extra == 'devel'
Provides-Extra: docarray
Requires-Dist: docarray (<0.30.0,>=0.16.4) ; extra == 'docarray'
Provides-Extra: docker
Requires-Dist: docker ; extra == 'docker'
Provides-Extra: fastapi
Requires-Dist: fastapi (>=0.76.0) ; extra == 'fastapi'
Provides-Extra: filelock
Requires-Dist: filelock ; extra == 'filelock'
Provides-Extra: flaky
Requires-Dist: flaky ; extra == 'flaky'
Provides-Extra: grpcio
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'grpcio'
Provides-Extra: grpcio-health-checking
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'grpcio-health-checking'
Provides-Extra: grpcio-reflection
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'grpcio-reflection'
Provides-Extra: jcloud
Requires-Dist: jcloud (>=0.0.35) ; extra == 'jcloud'
Provides-Extra: jina-hubble-sdk
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'jina-hubble-sdk'
Provides-Extra: jsonschema
Requires-Dist: jsonschema ; extra == 'jsonschema'
Provides-Extra: kubernetes
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'kubernetes'
Provides-Extra: mock
Requires-Dist: mock ; extra == 'mock'
Provides-Extra: numpy
Requires-Dist: numpy ; extra == 'numpy'
Provides-Extra: opentelemetry-api
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'opentelemetry-api'
Provides-Extra: opentelemetry-exporter-otlp
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'opentelemetry-exporter-otlp'
Provides-Extra: opentelemetry-exporter-otlp-proto-grpc
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'opentelemetry-exporter-otlp-proto-grpc'
Provides-Extra: opentelemetry-exporter-prometheus
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'opentelemetry-exporter-prometheus'
Provides-Extra: opentelemetry-instrumentation-aiohttp-client
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'opentelemetry-instrumentation-aiohttp-client'
Provides-Extra: opentelemetry-instrumentation-fastapi
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'opentelemetry-instrumentation-fastapi'
Provides-Extra: opentelemetry-instrumentation-grpc
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'opentelemetry-instrumentation-grpc'
Provides-Extra: opentelemetry-sdk
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'opentelemetry-sdk'
Provides-Extra: opentelemetry-test-utils
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'opentelemetry-test-utils'
Provides-Extra: packaging
Requires-Dist: packaging (>=20.0) ; extra == 'packaging'
Provides-Extra: pathspec
Requires-Dist: pathspec ; extra == 'pathspec'
Provides-Extra: perf
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'perf'
Requires-Dist: uvloop ; extra == 'perf'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'perf'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'perf'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'perf'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'perf'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'perf'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'perf'
Provides-Extra: portforward
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'portforward'
Provides-Extra: prometheus-api-client
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'prometheus-api-client'
Provides-Extra: prometheus_client
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'prometheus_client'
Provides-Extra: protobuf
Requires-Dist: protobuf (>=3.19.0) ; extra == 'protobuf'
Provides-Extra: psutil
Requires-Dist: psutil ; extra == 'psutil'
Provides-Extra: pydantic
Requires-Dist: pydantic ; extra == 'pydantic'
Provides-Extra: pytest
Requires-Dist: pytest ; extra == 'pytest'
Provides-Extra: pytest-asyncio
Requires-Dist: pytest-asyncio ; extra == 'pytest-asyncio'
Provides-Extra: pytest-cov
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'pytest-cov'
Provides-Extra: pytest-custom_exit_code
Requires-Dist: pytest-custom-exit-code ; extra == 'pytest-custom_exit_code'
Provides-Extra: pytest-kind
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'pytest-kind'
Provides-Extra: pytest-lazy-fixture
Requires-Dist: pytest-lazy-fixture ; extra == 'pytest-lazy-fixture'
Provides-Extra: pytest-mock
Requires-Dist: pytest-mock ; extra == 'pytest-mock'
Provides-Extra: pytest-repeat
Requires-Dist: pytest-repeat ; extra == 'pytest-repeat'
Provides-Extra: pytest-reraise
Requires-Dist: pytest-reraise ; extra == 'pytest-reraise'
Provides-Extra: pytest-timeout
Requires-Dist: pytest-timeout ; extra == 'pytest-timeout'
Provides-Extra: python-multipart
Requires-Dist: python-multipart ; extra == 'python-multipart'
Provides-Extra: pyyaml
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'pyyaml'
Provides-Extra: requests
Requires-Dist: requests ; extra == 'requests'
Provides-Extra: requests-mock
Requires-Dist: requests-mock ; extra == 'requests-mock'
Provides-Extra: scipy
Requires-Dist: scipy (>=1.6.1) ; extra == 'scipy'
Provides-Extra: sgqlc
Requires-Dist: sgqlc ; extra == 'sgqlc'
Provides-Extra: standard
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'standard'
Requires-Dist: uvloop ; extra == 'standard'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'standard'
Requires-Dist: aiohttp ; extra == 'standard'
Requires-Dist: pydantic ; extra == 'standard'
Requires-Dist: pathspec ; extra == 'standard'
Requires-Dist: uvicorn[standard] ; extra == 'standard'
Requires-Dist: docker ; extra == 'standard'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'standard'
Requires-Dist: requests ; extra == 'standard'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'standard'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'standard'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'standard'
Requires-Dist: websockets ; extra == 'standard'
Requires-Dist: python-multipart ; extra == 'standard'
Requires-Dist: filelock ; extra == 'standard'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'standard'
Requires-Dist: aiofiles ; extra == 'standard'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'standard'
Provides-Extra: strawberry-graphql
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'strawberry-graphql'
Provides-Extra: tensorflow
Requires-Dist: tensorflow (>=2.0) ; extra == 'tensorflow'
Provides-Extra: test
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'test'
Requires-Dist: flaky ; extra == 'test'
Requires-Dist: Pillow ; extra == 'test'
Requires-Dist: pytest-repeat ; extra == 'test'
Requires-Dist: pytest ; extra == 'test'
Requires-Dist: pytest-asyncio ; extra == 'test'
Requires-Dist: pytest-lazy-fixture ; extra == 'test'
Requires-Dist: pytest-custom-exit-code ; extra == 'test'
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'test'
Requires-Dist: pytest-timeout ; extra == 'test'
Requires-Dist: psutil ; extra == 'test'
Requires-Dist: black (==22.3.0) ; extra == 'test'
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'test'
Requires-Dist: pytest-reraise ; extra == 'test'
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'test'
Requires-Dist: scipy (>=1.6.1) ; extra == 'test'
Requires-Dist: coverage (==6.2) ; extra == 'test'
Requires-Dist: requests-mock ; extra == 'test'
Requires-Dist: pytest-mock ; extra == 'test'
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'test'
Requires-Dist: mock ; extra == 'test'
Provides-Extra: torch
Requires-Dist: torch ; extra == 'torch'
Provides-Extra: urllib3
Requires-Dist: urllib3 (<2.0.0) ; extra == 'urllib3'
Provides-Extra: uvicorn_standard_
Requires-Dist: uvicorn[standard] ; extra == 'uvicorn_standard_'
Provides-Extra: uvloop
Requires-Dist: uvloop ; extra == 'uvloop'
Provides-Extra: watchfiles
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'watchfiles'
Provides-Extra: websockets
Requires-Dist: websockets ; extra == 'websockets'

<p align="center">
<!-- survey banner start -->
<a href="https://10sw1tcpld4.typeform.com/to/EGAEReM7?utm_source=readme&utm_medium=github&utm_campaign=user%20experience&utm_term=feb2023&utm_content=survey">
  <img src="./.github/banner.svg?raw=true">
</a>
<!-- survey banner end -->
</p>

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/docs/_static/logo-light.svg?raw=true" alt="Jina logo: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · Cloud Native" width="150px"></a>
<br><br><br>
</p>

<p align="center">
<b>Build multimodal AI services with cloud native technologies</b>
</p>


<p align=center>
<a href="https://pypi.org/project/jina/"><img alt="PyPI" src="https://img.shields.io/pypi/v/jina?label=Release&style=flat-square"></a>
<!--<a href="https://codecov.io/gh/jina-ai/jina"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/jina-ai/jina/master?&logo=Codecov&logoColor=white&style=flat-square"></a>-->
<a href="https://discord.jina.ai"><img src="https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square"></a>
<a href="https://pypistats.org/packages/jina"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/jina?style=flat-square"></a>
<a href="https://github.com/jina-ai/jina/actions/workflows/cd.yml"><img alt="Github CD status" src="https://github.com/jina-ai/jina/actions/workflows/cd.yml/badge.svg"></a>
</p>

<!-- start jina-description -->

Jina is an MLOps framework for building multimodal AI applications in Python as microservices that communicate over gRPC, HTTP and WebSocket protocols.
It lets developers build and serve **services** and **pipelines**, then **scale** and **deploy** them to production, removing infrastructure complexity so engineering teams can focus on the
logic and algorithms and save valuable time and resources.

Jina aims to provide a smooth Pythonic experience for transitioning from local development to advanced orchestration frameworks such as Docker Compose, Kubernetes, or Jina AI Cloud.
It handles the infrastructure complexity, making advanced solution engineering and cloud-native technologies accessible to every developer.



<p align="center">
<strong><a href="#build-ai-services">Build and deploy a gRPC microservice</a> • <a href="#build-a-pipeline">Build and deploy a pipeline</a></strong>
</p>

Applications built with Jina enjoy the following features out of the box:

🌌 **Universal**
  - Build applications that deliver fresh insights from multiple data types such as text, image, audio, video, 3D mesh and PDF with [the Linux Foundation's DocArray](https://github.com/docarray/docarray).
  - Support for all mainstream deep learning frameworks.
  - Polyglot gateway that supports gRPC, WebSocket, HTTP and GraphQL protocols with TLS.

⚡ **Performance**
  - Intuitive design pattern for high-performance microservices.
  - Easy scaling: set replicas, sharding in one line. 
  - Duplex streaming between client and server.
  - Async and non-blocking data processing over dynamic flows.

☁️ **Cloud native**
  - Seamless Docker container integration: sharing, exploring, sandboxing, versioning and dependency control via [Executor Hub](https://cloud.jina.ai).
  - Full observability via OpenTelemetry, Prometheus and Grafana.
  - Fast deployment to Kubernetes and Docker Compose.

🍱 **Ecosystem**
  - Improved engineering efficiency thanks to the Jina AI ecosystem, so you can focus on innovating with the data applications you build.
  - Free CPU/GPU hosting via [Jina AI Cloud](https://cloud.jina.ai).

Jina's value proposition may seem quite similar to that of FastAPI. However, there are several fundamental differences:

 **Data structure and communication protocols**
  - FastAPI communication relies on Pydantic, while Jina relies on [DocArray](https://github.com/docarray/docarray), allowing Jina to expose its
  services over multiple protocols.

 **Advanced orchestration and scaling capabilities**
  - Jina lets you deploy applications formed from multiple microservices that can be containerized and scaled independently.
  - Jina allows you to easily containerize and orchestrate your services, providing concurrency and scalability.

 **Journey to the cloud**
  - Jina provides a smooth transition from local development (using [DocArray](https://github.com/docarray/docarray)) to local serving (using Jina's orchestration layer)
  to production-ready services that use Kubernetes' capacity to orchestrate the lifetime of containers.
  - With [Jina AI Cloud](https://cloud.jina.ai) you get scalable and serverless deployments of your applications in one command.

<!-- end jina-description -->

<p align="center">
<a href="#"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/core-tree-graph.svg?raw=true" alt="Jina in Jina AI neural search ecosystem" width="100%"></a>
</p>

## [Documentation](https://docs.jina.ai)

## Install 

```bash
pip install jina transformers sentencepiece
```

Find more install options for [Apple Silicon](https://docs.jina.ai/get-started/install/apple-silicon-m1-m2/) and [Windows](https://docs.jina.ai/get-started/install/windows/).

## Get Started

### Basic Concepts

Jina has four fundamental concepts:

- A [**Document**](https://docarray.jina.ai/) (from [DocArray](https://github.com/docarray/docarray)) is the input/output format in Jina.
- An [**Executor**](https://docs.jina.ai/concepts/serving/executor/) is a Python class that transforms and processes Documents.
- A [**Deployment**](https://docs.jina.ai/concepts/orchestration/deployment) serves a single Executor, while a [**Flow**](https://docs.jina.ai/concepts/orchestration/flow/) serves Executors chained into a pipeline.

[The full glossary is explained here](https://docs.jina.ai/concepts/preliminaries/#).
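To give a feel for how these pieces fit together, here is a toy sketch in plain Python. These are stand-in classes for illustration only, not Jina's real `Document`, `Executor`, or `Deployment` (those are shown in the sections below):

```python
# Toy stand-ins for illustration only -- the real classes come from
# docarray and jina, as shown in the sections below.


class Document:
    """Carries a piece of data (here just text)."""

    def __init__(self, text=""):
        self.text = text


class UppercaseExecutor:
    """An Executor transforms a batch of Documents."""

    def process(self, docs):
        for doc in docs:
            doc.text = doc.text.upper()
        return docs


# A Deployment would serve one such Executor behind gRPC/HTTP;
# calling it locally looks like this:
docs = [Document(text="hello jina")]
UppercaseExecutor().process(docs)
print(docs[0].text)  # HELLO JINA
```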

---

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/streamline-banner.png?raw=true" alt="Jina: Streamline AI & ML Product Delivery" width="100%"></a>
</p>

### Build AI Services
<!-- start build-ai-services -->

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)

Let's build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/executor/)**. Our simple Executor will use Facebook's mBART-50 model to translate French to English. We'll then use a **Deployment** to serve it.

> **Note**
> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).

> **Note**
> Run the [code in Colab](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb#scrollTo=0l-lkmz4H-jW) to install all dependencies.

Let's implement the service's logic:

<table>
<tr>
<th><code>translate_executor.py</code> </th> 
<tr>
<td>

```python
from docarray import DocumentArray
from jina import Executor, requests
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


class Translator(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.tokenizer = AutoTokenizer.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt", src_lang="fr_XX"
        )
        self.model = AutoModelForSeq2SeqLM.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt"
        )

    @requests
    def translate(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.text = self._translate(doc.text)

    def _translate(self, text):
        encoded_en = self.tokenizer(text, return_tensors="pt")
        generated_tokens = self.model.generate(
            **encoded_en, forced_bos_token_id=self.tokenizer.lang_code_to_id["en_XX"]
        )
        return self.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[
            0
        ]
```

</td>
</tr>
</table>

Then we deploy it with either the Python API or YAML:
<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>deployment.py</code> </th> 
<th> YAML: <code>deployment.yml</code> </th>
</tr>
<tr>
<td>

```python
from jina import Deployment
from translate_executor import Translator

with Deployment(uses=Translator, timeout_ready=-1) as dep:
    dep.block()
```

</td>
<td>

```yaml
jtype: Deployment
with:
  uses: Translator
  py_modules:
    - translate_executor.py # name of the module containing Translator
  timeout_ready: -1
```

Then run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`

</td>
</tr>
</table>
</div>

```text
──────────────────────────────────────── 🎉 Deployment is ready to serve! ─────────────────────────────────────────
╭────────────── 🔗 Endpoint ───────────────╮
│  ⛓      Protocol                   GRPC │
│  🏠        Local          0.0.0.0:12345  │
│  🔒      Private      172.28.0.12:12345  │
│  🌍       Public    35.230.97.208:12345  │
╰──────────────────────────────────────────╯
```

Use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the service:

```python
from docarray import Document
from jina import Client

french_text = Document(
    text='un astronaut est en train de faire une promenade dans un parc'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[french_text])

print(response[0].text)
```

```text
an astronaut is walking in a park
```

<!-- end build-ai-services -->

> **Note**
> In a notebook, you cannot use `deployment.block()` and then make requests with the client. Please refer to the Colab link above for reproducible Jupyter Notebook code snippets.


### Build a pipeline

<!-- start build-pipelines -->
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb#scrollTo=YfNm1nScH30U)

Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.jina.ai/concepts/flow/) comes in.

A Flow is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.jina.ai/concepts/executor/) and a [Gateway](https://docs.jina.ai/concepts/gateway/) to offer an end-to-end service.

> **Note**
> If you just want to serve a single Executor, you can use a [Deployment](#build-ai-services).

For instance, let's combine [our French translation service](#build-ai-services) with a Stable Diffusion image generation service from Jina AI's [Executor Hub](https://cloud.jina.ai/executors). Chaining these services together into a [Flow](https://docs.jina.ai/concepts/flow/) gives us a multilingual image generation service.
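Conceptually, a Flow behaves like function composition over Documents: each step's output feeds the next. Here is a toy sketch of that idea with plain Python stand-ins (not Jina's engine, and the step functions are placeholders, not real models):

```python
def translate(texts):
    # stand-in for the French-to-English translation Executor
    return [t.replace("bonjour", "hello") for t in texts]


def generate_image(texts):
    # stand-in for the text-to-image Executor; returns a description
    return [f"<image of: {t}>" for t in texts]


def run_flow(inputs, steps):
    """Pass the data through each step in order, like a linear Flow."""
    out = inputs
    for step in steps:
        out = step(out)
    return out


print(run_flow(["bonjour monde"], [translate, generate_image]))
# ['<image of: hello monde>']
```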

Build the Flow with either Python or YAML:

<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>flow.py</code> </th> 
<th> YAML: <code>flow.yml</code> </th>
</tr>
<tr>
<td>

```python
from jina import Flow
from translate_executor import Translator

flow = (
    Flow()
    .add(uses=Translator, timeout_ready=-1)
    .add(
        uses='jinaai://jina-ai/TextToImage',  # use an Executor from Executor Hub
        timeout_ready=-1,
        install_requirements=True,
    )
)

with flow:
    flow.block()
```

</td>
<td>

```yaml
jtype: Flow
executors:
  - uses: Translator
    timeout_ready: -1
    py_modules:
      - translate_executor.py
  - uses: jinaai://jina-ai/TextToImage
    timeout_ready: -1
    install_requirements: true
```

Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`

</td>
</tr>
</table>
</div>

```text
─────────────────────────────────────────── 🎉 Flow is ready to serve! ────────────────────────────────────────────
╭────────────── 🔗 Endpoint ───────────────╮
│  ⛓      Protocol                   GRPC  │
│  🏠        Local          0.0.0.0:12345  │
│  🔒      Private      172.28.0.12:12345  │
│  🌍       Public    35.240.201.66:12345  │
╰──────────────────────────────────────────╯
```

Then, use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the Flow:

```python
from jina import Client, Document

client = Client(port=12345)  # use port from output above

french_text = Document(
    text='un astronaut est en train de faire une promenade dans un parc'
)

response = client.post(on='/', inputs=[french_text])

response[0].display()
```


![stable-diffusion-output.png](https://raw.githubusercontent.com/jina-ai/jina/master/.github/stable-diffusion-output.png)


You can also deploy a Flow to JCloud.

First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.jina.ai/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.

Then, use `jina cloud deploy` command to deploy to the cloud:


```shell
wget https://raw.githubusercontent.com/jina-ai/jina/master/.github/getting-started/jcloud-flow.yml
jina cloud deploy jcloud-flow.yml
```

⚠️ **Caution: Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.**

Read more about [deploying Flows to JCloud](https://docs.jina.ai/concepts/jcloud/#deploy).

<!-- end build-pipelines -->

Check [the getting-started project source code](https://github.com/jina-ai/jina/tree/master/.github/getting-started).

---

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/no-complexity-banner.png?raw=true" alt="Jina: No Infrastructure Complexity, High Engineering Efficiency" width="100%"></a>
</p>

Why not just use standard Python to build that microservice and pipeline? Jina accelerates your application's time to market by making it scalable and cloud-native. Jina also handles infrastructure complexity in production and other Day-2 operations, so you can focus on the data application itself.

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/scalability-banner.png?raw=true" alt="Jina: Scalability and concurrency with ease" width="100%"></a>
</p>

### Easy scalability and concurrency

Jina comes with scalability features out of the box like [replicas](https://docs.jina.ai/concepts/orchestration/scale-out/#replicate-executors), [shards](https://docs.jina.ai/concepts/orchestration/scale-out/#customize-polling-behaviors) and [dynamic batching](https://docs.jina.ai/concepts/serving/executor/dynamic-batching/).
This lets you easily increase your application's throughput.

Let's scale a Stable Diffusion Executor deployment with replicas and dynamic batching:

* Create two replicas, with [a GPU assigned for each](https://docs.jina.ai/concepts/flow/scale-out/#replicate-on-multiple-gpus).
* Enable dynamic batching so that parallel incoming requests are processed together in a single model inference pass.


<div class="table-wrapper">
<table>
<tr>
<th> Normal Deployment </th> 
<th> Scaled Deployment </th>
</tr>
<tr>
<td>

```yaml
jtype: Deployment
with:
  timeout_ready: -1
  uses: jinaai://jina-ai/TextToImage
  install_requirements: true
```

</td>
<td>

```yaml
jtype: Deployment
with:
  timeout_ready: -1
  uses: jinaai://jina-ai/TextToImage
  install_requirements: true
  env:
    CUDA_VISIBLE_DEVICES: RR
  replicas: 2
  uses_dynamic_batching: # configure dynamic batching
    /default:
      preferred_batch_size: 10
      timeout: 200
```

</td>
</tr>
</table>
</div>


Assuming your machine has two GPUs, the scaled Deployment YAML gives better throughput than the normal Deployment.
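To make the `preferred_batch_size` and `timeout` settings in the YAML above concrete, here is a minimal, framework-independent sketch of the batching logic (the function name and the event-time model are illustrative, not Jina's actual implementation):

```python
from typing import Any, List, Sequence, Tuple


def form_batches(
    arrivals: Sequence[Tuple[int, Any]],
    preferred_batch_size: int,
    timeout_ms: int,
) -> List[List[Any]]:
    """Group (timestamp_ms, payload) arrivals into batches.

    A batch is flushed when it reaches the preferred size, or when the
    oldest queued request has waited at least `timeout_ms`.
    """
    batches: List[List[Any]] = []
    queue: List[Tuple[int, Any]] = []
    for ts, payload in arrivals:
        # flush if the oldest queued request has exceeded the timeout
        if queue and ts - queue[0][0] >= timeout_ms:
            batches.append([p for _, p in queue])
            queue = []
        queue.append((ts, payload))
        # flush as soon as the preferred batch size is reached
        if len(queue) == preferred_batch_size:
            batches.append([p for _, p in queue])
            queue = []
    if queue:
        batches.append([p for _, p in queue])
    return batches
```

With `preferred_batch_size: 10` and `timeout: 200` as above, ten requests arriving within 200 ms share one inference call, while a lone request is still served once the timeout expires.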

These features apply to both [Deployment YAML](https://docs.jina.ai/concepts/executor/deployment-yaml-spec/#deployment-yaml-spec) and [Flow YAML](https://docs.jina.ai/concepts/flow/yaml-spec/). Thanks to the YAML syntax, you can inject deployment configurations regardless of Executor code.

---

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/container-banner.png?raw=true" alt="Jina: Seamless Container Integration" width="100%"></a>
</p>

### Seamless container integration

Use [Executor Hub](https://cloud.jina.ai) to share your Executors, or pull public and private Executors from the community, with no need to worry about dependencies.

To create an Executor:

```bash
jina hub new 
```

To push it to Executor Hub:

```bash
jina hub push .
```

To use a Hub Executor in your Flow:

|        | Docker container                           | Sandbox                                     | Source                              |
|--------|--------------------------------------------|---------------------------------------------|-------------------------------------|
| YAML   | `uses: jinaai+docker://<username>/MyExecutor`        | `uses: jinaai+sandbox://<username>/MyExecutor`        | `uses: jinaai://<username>/MyExecutor`        |
| Python | `.add(uses='jinaai+docker://<username>/MyExecutor')` | `.add(uses='jinaai+sandbox://<username>/MyExecutor')` | `.add(uses='jinaai://<username>/MyExecutor')` |

Executor Hub manages everything on the backend:

- Automated builds on the cloud
- Cost-efficient storage, deployment, and delivery of Executors
- Automatic resolution of version conflicts and dependencies
- Instant delivery of any Executor via [Sandbox](https://docs.jina.ai/concepts/executor/hub/sandbox/) without pulling anything locally

---

<p align="center">
<a href="https://docs.jina.ai"><img src=".github/readme/cloud-native-banner.png?raw=true" alt="Jina: Seamless Container Integration" width="100%"></a>
</p>

### Get on the fast lane to cloud-native

Using Kubernetes with Jina is easy:

```bash
jina export kubernetes flow.yml ./my-k8s
kubectl apply -R -f my-k8s
```

And so is Docker Compose:

```bash
jina export docker-compose flow.yml docker-compose.yml
docker-compose up
```

> **Note**
> You can also export Deployment YAML to [Kubernetes](https://docs.jina.ai/concepts/executor/serve/#serve-via-kubernetes) and [Docker Compose](https://docs.jina.ai/concepts/executor/serve/#serve-via-docker-compose).

Likewise, tracing and monitoring with OpenTelemetry is straightforward:

```python
from docarray import DocumentArray
from jina import Executor, requests


class Encoder(Executor):
    @requests
    def encode(self, docs: DocumentArray, tracing_context, **kwargs):
        # `preprocessing` and `model_inference` stand in for your own functions
        with self.tracer.start_as_current_span(
            'encode', context=tracing_context
        ) as span:
            with self.monitor(
                'preprocessing_seconds', 'Time spent preprocessing the requests'
            ):
                docs.tensors = preprocessing(docs)
            with self.monitor(
                'model_inference_seconds', 'Time spent on model inference'
            ):
                docs.embedding = model_inference(docs.tensors)
```

You can integrate Jaeger or any other distributed tracing tool to collect and visualize request-level and application-level service operation attributes. This helps you analyze the request-response lifecycle, application behavior, and performance.
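To actually emit those spans, tracing must be enabled when the Flow starts. A minimal sketch in Flow YAML — the parameter names follow Jina's OpenTelemetry setup, and it assumes an OTLP-compatible collector reachable on `localhost` at the default gRPC port 4317 (adjust host and port to your collector):

```yaml
jtype: Flow
with:
  tracing: true
  traces_exporter_host: localhost
  traces_exporter_port: 4317
```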

To use Grafana, [download this JSON](https://github.com/jina-ai/example-grafana-prometheus/blob/main/grafana-dashboards/flow-histogram-metrics.json) and import it into Grafana:

<p align="center">
<a href="https://docs.jina.ai"><img src=".github/readme/grafana-histogram-metrics.png?raw=true" alt="Jina: Seamless Container Integration" width="70%"></a>
</p>

To trace requests with Jaeger:
<p align="center">
<a href="https://docs.jina.ai"><img src=".github/readme/jaeger-tracing-example.png?raw=true" alt="Jina: Seamless Container Integration" width="70%"></a>
</p>

Which cloud-native technology do you still find challenging? [Tell us](https://github.com/jina-ai/jina/issues) and we'll handle the complexity and make it easy for you.

<!-- start support-pitch -->

## Support

- Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
- Join our [Engineering All Hands](https://youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne) meet-up to discuss your use case and learn about Jina's new features.
    - **When?** The second Tuesday of every month
    - **Where?**
      Zoom ([see our public events calendar](https://calendar.google.com/calendar/embed?src=c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com&ctz=Europe%2FBerlin)/[.ical](https://calendar.google.com/calendar/ical/c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com/public/basic.ics))
      and [live stream on YouTube](https://youtube.com/c/jina-ai)
- Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/jina-ai).

## Join Us

Jina is backed by [Jina AI](https://jina.ai) and licensed under [Apache-2.0](./LICENSE).

<!-- end support-pitch -->
