
Have you streamed a program on Disney+, watched your favorite binge-worthy series on Peacock, or cheered on your favorite team during the World Cup on one of the top 20 streaming platforms around the globe? If the answer is yes, you’ve already benefited from Conviva technology, which helps the world’s leading streaming publishers deliver exceptional streaming experiences and grow their businesses.
Conviva is the only global streaming analytics platform for big data that collects, standardizes, and puts trillions of cross-screen streaming data points in context, in real time. The Conviva platform provides comprehensive, continuous, census-level measurement through real-time, server-side sessionization at unprecedented scale. If this sounds important, it is! We measure a global footprint of more than 500 million unique viewers in 180 countries watching 220 billion streams per year across 3 billion streaming applications on devices. With Conviva, customers get a unique level of actionability and scale from continuous streaming measurement insights and benchmarking across every stream, every screen, every second.
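Server-side sessionization, in essence, means grouping per-viewer playback heartbeats into sessions by inactivity gap. As a minimal sketch of the idea (the event shape and the timeout value are illustrative assumptions, not Conviva's actual implementation):

```rust
use std::collections::HashMap;

/// Count playback sessions per viewer from (viewer_id, timestamp_secs)
/// heartbeat events. A gap larger than `timeout_secs` since the viewer's
/// previous heartbeat starts a new session. (Illustrative sketch only.)
fn count_sessions(events: &[(&str, u64)], timeout_secs: u64) -> HashMap<String, u32> {
    let mut last_seen: HashMap<String, u64> = HashMap::new();
    let mut sessions: HashMap<String, u32> = HashMap::new();
    for &(viewer, ts) in events {
        // A session starts on the first event, or after a long silence.
        let is_new = match last_seen.get(viewer) {
            Some(&prev) => ts.saturating_sub(prev) > timeout_secs,
            None => true,
        };
        if is_new {
            *sessions.entry(viewer.to_string()).or_insert(0) += 1;
        }
        last_seen.insert(viewer.to_string(), ts);
    }
    sessions
}
```

A production system would of course shard this state across many servers and evict idle viewers, but the per-viewer gap logic is the core of sessionization.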
What you get to do in this role:
Work on extremely high-scale Rust web services and backend systems.
Design and develop solutions for highly scalable web and backend systems.
Proactively identify and solve performance issues.
Maintain a high bar on code quality and unit testing.
What you bring to the role:
5+ years of hands-on software development experience.
At least 2 years of Rust development experience.
Knowledge of Cargo crates for Kafka, Redis, etc.
Strong CS fundamentals, including system design, data structures and algorithms.
Expertise in backend and web services development.
Good analytical and troubleshooting skills.
What will help you stand out:
Experience working with large scale web services and applications.
Exposure to Golang, Scala, or Java.
Exposure to Big data systems like Kafka, Spark, Hadoop etc.
Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their business ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include: DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision and Warner Bros Discovery.
Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets!

Role & Responsibilities
You will work with the Rust compiler and be responsible for compiling to alternate targets such as WebAssembly
You will practice TDD: unit testing individual functions and integration testing for publicly exposed APIs
Working with a Git-style workflow where every commit deploys to a staging environment and merged pull requests deploy to production
Setting up CI/CD pipelines for testing and deployment (canary, staging) using GitHub Actions according to project needs
Developing software in Rust
Maintaining and improving existing Rust codebases.
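The TDD style described above can be sketched with a pure function plus its unit tests; code like this compiles unchanged for both native and WebAssembly targets (e.g. `cargo build --target wasm32-unknown-unknown`). The function name and domain are illustrative assumptions:

```rust
/// Clamp a requested playback bitrate into a device's supported range.
/// (Hypothetical example function for illustrating the TDD workflow.)
pub fn clamp_bitrate(requested: u32, min: u32, max: u32) -> u32 {
    requested.max(min).min(max)
}

#[cfg(test)]
mod tests {
    use super::*;

    // In TDD these tests are written first, then the function is
    // implemented until `cargo test` passes.
    #[test]
    fn clamps_into_range() {
        assert_eq!(clamp_bitrate(500, 1_000, 8_000), 1_000);
        assert_eq!(clamp_bitrate(9_000, 1_000, 8_000), 8_000);
        assert_eq!(clamp_bitrate(4_000, 1_000, 8_000), 4_000);
    }
}
```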
The candidate should have extensive experience in designing and developing scalable data pipelines and real-time data processing solutions. As a key member of the team, the Senior Data Engineer will play a critical role in building end-to-end data workflows, supporting machine learning model deployment, and driving MLOps practices in a fast-paced, agile environment. Strong expertise in Apache Kafka, Apache Flink, AWS SageMaker, and Terraform is essential. Additional experience with infrastructure automation and CI/CD for ML models is a significant advantage.
Key Responsibilities
- Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
- Build scalable and automated MLOps pipelines for training, validation, and deployment of models using AWS SageMaker and associated services.
- Implement and manage Infrastructure as Code (IaC) using Terraform to provision and manage AWS environments.
- Collaborate with data scientists, ML engineers, and DevOps teams to streamline model deployment workflows and ensure reliable production delivery.
- Optimize data storage and retrieval strategies for large-scale structured and unstructured datasets.
- Develop data transformation logic and integrate data from various internal and external sources into data lakes and warehouses.
- Monitor, troubleshoot, and enhance performance of data systems in a cloud-native, fast-evolving production setup.
- Ensure adherence to data governance, privacy, and security standards across all data handling activities.
- Document data engineering solutions and workflows to facilitate cross-functional understanding and ongoing maintenance.
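The real-time pipelines above would in practice be written against the Kafka and Flink APIs; as a dependency-free illustration of the core aggregation such a job performs, here is a tumbling-window sum in Rust (the event shape and window size are assumptions for the sketch):

```rust
use std::collections::BTreeMap;

/// Assign each (timestamp_secs, value) event to a tumbling window and
/// sum values per window -- the kind of windowed aggregation a Flink
/// job would run continuously over a Kafka stream.
fn tumbling_window_sums(events: &[(u64, f64)], window_secs: u64) -> BTreeMap<u64, f64> {
    let mut sums = BTreeMap::new();
    for &(ts, value) in events {
        // Each window is identified by its start timestamp.
        let window_start = (ts / window_secs) * window_secs;
        *sums.entry(window_start).or_insert(0.0) += value;
    }
    sums
}
```

Unlike this batch sketch, a streaming engine also handles late and out-of-order events via watermarks, which is where frameworks like Flink earn their keep.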
Responsibilities:
• Take on complex problems that span multiple components and teams.
• Independently own one or more modules, including requirement analysis, design, development, maintenance & support
• Write extensive, efficient code to address complex modules that handle the interaction between multiple components.
• Rapidly iterate to add new functionality and solve complex problems with simple and intuitive solutions
• Produce architectures with clean interfaces that are efficient and scalable
• Participate and contribute to architectural discussions
• Solve production issues. Investigate and provide solutions to minimize the business impact due to the outage
• Continuously improve performance metrics of modules you own.
• Collaborate effectively across teams to solve problems, execute and deliver results
Requirements:
• Experience: 2+ years
• A Bachelor's or Master's Degree in Computer Science
• Software engineering and product delivery experience, with a strong background in algorithms
• Experience in architecting & building real-time, large-scale e-commerce applications
• Experience with high-performance websites serving millions of daily visitors is a plus
• Excellent command over Data Structures and Algorithms
• Experience with web technologies, Go/Java/Python
• Language: GO or Python
• Strong expertise in DS and Algo
• Strong leadership skills - experience mentoring, building products from scratch, or owning design and architecture.
• Have worked on scaling systems right from scratch.
• Have grown a system from a small user base to a large one while writing optimized code
• Comfortable with both high-level design (HLD) and low-level design (LLD)
About Us:
Developed in formal collaboration with the University of Cambridge in May 2000, HeyMath! is an Ed-Tech company whose mission is to Raise the Game in Maths for school systems around the world. We do this using technology to deliver engaging teaching methodologies and personalised learning paths for students. HeyMath! has been successfully adopted by CBSE schools since 2004, with positive outcomes for the entire ecosystem.
Check us out at www.heymath.com
We plan to work mainly from home in 2022 and the virtual office atmosphere is collegiate, informal and friendly, with small high-impact teams making a difference to customers.
What we are looking for:
Experience in building and re-engineering cloud based solutions on AWS.
Strong knowledge of Object-Oriented Programming (OOP) and design patterns is a must. Hands-on development with the Spring MVC framework.
Experience working on Java 8 or above.
Must have very good knowledge of RDBMS such as MySQL and performance tuning of the same.
Exposure to server-side and client-side caching mechanisms. Ability to debug applications and provide immediate workable solutions.
Experience with Elastic Search, Kafka, or Kubernetes (any or all) is nice to have.
We need joiners ASAP! If you can join within a maximum of 30 days, please share your updated profile with the necessary details mentioned below
Job Role : Backend Engineer / Sr. Backend Engineer
As a Senior Backend Platform Engineer, you will work closely with engineers and scientists in a small but fast-growing team environment while (i) applying state-of-the-art AI technologies in deep learning and computer vision to design and develop the user recommendation platform that helps our users find their best matches, and (ii) building scalable and reliable backend API services that serve a massive volume of client traffic while ensuring the best user experience. A successful candidate will have strong technical skills and a motivation to achieve results in a fast-paced environment.
In this Senior Backend Platform Engineer role, you will:
- Design, develop, and operate resilient distributed services that run on ECS or Kubernetes to serve hundreds of millions of users around the world
- Collaborate with various functional teams on expansion of our recommendation systems
- Influence the roadmap and product development of KlearNow App and services
- Recruit, inspire, and develop team members
We're looking for:
- BS/MS in Computer Science or equivalent, with 1-5 or 5-10 years of industry experience
- Excellent knowledge of Computer Science fundamentals, with strong competencies in data structures, algorithms, software design and coding
- Strong experience designing and building distributed backend systems that handle high volumes of traffic
- Experience in handling ambiguous business requirements with excellent prioritization, time management abilities, and a focus on execution
- Passion to solve complex problems and make continuous improvements
- Experience in Search and Recommendation Systems preferred
- Excellent knowledge and experience on a backend language, like Java, Scala, Go, Node.js, etc.
Bonus points if you have:
- Experience in design and development using NoSQL, such as DynamoDB, Cassandra, or RocksDB
- Experience with Aerospike, ElasticSearch, Kafka, and Spark
- Experience with Agile development methodology and CI/CD
- Experience with Docker containers along with Kubernetes or ECS
If you wish to know more on the opportunity and like to pursue this further, then I would suggest that you share your updated CV and suggest a good time when we could connect.
Why us
KlearNow is operational and a certified Customs Business provider in the US, Canada, and the UK, with plans to grow into many more markets in the near future. Be a part of a rapidly growing company where you will have the opportunity to extend our leadership position and fast-track innovation behind AI-powered intelligent supply chain solutions.
Over and above the core customs clearance solution, KlearNow is also providing consolidated freight visibility, data and document management and intra-port activity for efficient Drayage.
To know more about KlearNow, please visit our website: https://www.klearnow.com/
KlearNow has the flexibility of a small start-up with the security of a well-funded organization with strong backers and advisors.
KlearNow is transforming B2B supply chains with its smart Logistics as a Service (LaaS) platform that connects data, people, processes, and organizations. Its AI-powered platform digitizes paper-based transactions—streamlining customs clearance and drayage services. KlearNow empowers importers, exporters, freight forwarders, and supply chain partners by providing new levels of visibility and productivity to reduce costs and create enhanced customer experiences
As KlearNow grows, we’re looking for an Engineering Leader who is passionate about people growth and organizational development. This is a unique opportunity to help build the products and practices that will support KlearNow’s growth through the next round of funding. In this role, you will reimagine enterprise logistics software and build global products that enable KlearNow to expand in the Americas, Europe, and Asia.
Be a part of a rapidly growing, series B funded tech company where you will have the opportunity to extend our leadership position and fast-track innovation behind AI-powered intelligent supply chain solutions.
Primary Responsibilities
- Design, architect and develop advanced software solutions in a cross functional Agile team supporting multiple projects and initiatives
- Collaborate with product owners and/or the business on requirements definition, development of functional specifications, and design
- Collaborate on or lead development of technical design and specifications as required
- Code, test and document new applications as well as changes to existing system functionality and ensure successful completion
- Take on leadership roles as needed
Skills & Requirements
- Bachelor’s Degree required, preferably in Computer Science or related field
- 3+ years of software development experience using GoLang/Java programming language
- Experience with cloud technologies (AWS/Azure/GCP/Pivotal Cloud Foundry/any private cloud) and containerization is required
- Experience with a micro-services architecture is a plus
- Excellent communication, collaboration, reporting, analytical and problem solving skills
- Experience with PostgreSQL or other Relational Databases
- Test-driven development mindset and a focus on quality, scalability and performance
- Strong programming fundamentals and ability to produce high quality code
- Solid understanding of Agile (SCRUM) Development Process required
Job Description:
- Programming and optimizing Rust/Wasm-based smart contracts
- Design, research and develop blockchain-based solutions
- Developing decentralized high-performance systems
- Building reliable and fast data storages
- Working with virtual machines used by modern blockchains: WebAssembly, EVM, CosmWasm
- Implementing consensus algorithms and other protocols
- Security audits of third-party and internal solutions
- Establishing policies and procedures that produce secure, high-quality software
- Write and review technical proposals
- Improve engineering standards, tooling, and processes
- Coding with concurrency, efficiency and scalability as primary motive
- Document systems, build runbooks, and automate those processes
- Being hands on by writing, testing, and deploying high-performance networking code
- Rigor on clean code, unit testing, code coverage and best practices
- Developing infrastructure software
- Supporting and leading a team of junior developers.
Required qualifications:
3+ years of experience developing smart contracts, 2.5+ years of experience in Rust, willingness to learn on the go, the ability to write clean code, and a strong sense of responsibility.
Tech Stack:
Rust, Cargo, Git, Linux, Bash, and the ability to work with Docker.
Nice to have:
Cryptography and systems software development experience, understanding of design patterns, understanding of operating systems and networks, and the ability to design algorithms and mathematical models.
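Real CosmWasm contracts are written against the `cosmwasm-std` crate; as a dependency-free sketch of the state-machine style smart-contract work described above (all names here are illustrative, not a real contract interface), a minimal token ledger whose transfers either succeed atomically or fail:

```rust
use std::collections::HashMap;

/// Minimal token-ledger state machine in the style of a smart contract:
/// every state transition either fully succeeds or returns an error.
struct Ledger {
    balances: HashMap<String, u64>,
}

impl Ledger {
    fn new() -> Self {
        Ledger { balances: HashMap::new() }
    }

    /// Create new tokens for an account (e.g. an instantiate step).
    fn mint(&mut self, to: &str, amount: u64) {
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
    }

    /// Move tokens between accounts; rejects overdrafts.
    fn transfer(&mut self, from: &str, to: &str, amount: u64) -> Result<(), String> {
        let from_bal = self.balances.get(from).copied().unwrap_or(0);
        if from_bal < amount {
            return Err(format!("insufficient funds: {} < {}", from_bal, amount));
        }
        self.balances.insert(from.to_string(), from_bal - amount);
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
        Ok(())
    }

    fn balance(&self, who: &str) -> u64 {
        self.balances.get(who).copied().unwrap_or(0)
    }
}
```

An on-chain contract would additionally authenticate the sender, persist state through the VM's storage API, and emit events, but the check-then-commit transition logic is the same shape.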
Objective of the Role:
We are here to build a world-class tech organization with elite engineers and change-agents who would spearhead this change. Currently, we are looking for engineers who are skilled, passionate, driven and a wee bit crazy (yes, crazy works!) to join our tribe. The current position is for the supply chain teams and our primary focus is on scale and cost optimization. The small tweaks you make, the processes you alter, experiments you run and the business decisions you drive will have reverberating effects on our ability to add value to our customers and keep them coming back for more.
Role & Responsibilities
- Design and build the system which enables the logistics team to store and deliver 15 million products per month to customers across 20 cities in India
- Work on the vision, roadmap, and processes that make customer delivery experience more delightful
- Work across teams to design a platform that scales and is flexible enough for various kinds of future scenarios
- Work on optimizing the whole logistics supply chain from warehouse to customer
- Innovate to improve the efficiency of the existing supply chain systems
Desired skills & abilities:
- 3-6 years of experience in software development
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
- Experience in architecture and system design
- Experience in running high-performance distributed systems
- Understanding and implementation of security and data
- Highly experienced in back-end programming languages like Python, Java, JavaScript
- Experience with cloud message APIs and usage of push notifications
- Knowledge of code versioning tools such as Git, Mercurial or SVN
- Solid experience in software development practices
- Ability to mentor and manage teams
- Exposure to Agile/Scrum and Design thinking approaches
Software engineering plays a central role in building our products and systems. You will be responsible for designing, building, improving, or maintaining our web applications, third-party data integrations, data APIs, backend systems, and monitoring tools and infrastructure. You may work on our search engine (including scoring and relevance), reservation engine, automated pricing engine, business process engine, data applications, DevOps-related applications, and other new projects.
You will work in cross-functional teams and regularly meet great people from top-tier technology, consulting, product, or academic backgrounds. We work in an open environment where there are no boundaries or power distance. Everyone is encouraged to speak their mind, propose ideas, influence others, and continuously grow. You will get exposure to a multi-faceted, collaborative, intensive startup experience through our recent expansion into Southeast Asia and exploration of new products.
Qualifications
At least 3 years of experience in software engineering, application development, or system development
Excellent understanding of software engineering concepts, design patterns, and algorithms
Comfortable working up and down the technology stack
Curiosity to explore creative solutions and try new things
Bachelor's degree in Computer Science or equivalent from a reputable university
