Sr Data Engineer
Infogain

7 - 10 yrs
₹20L - ₹25L / yr
Bengaluru (Bangalore), Pune, Noida, Delhi, Gurugram
Skills
Data engineering
Python
SQL
Spark
PySpark
Cassandra
Groovy
Amazon Web Services (AWS)
Amazon S3
Windows Azure
Foundry
Good Clinical Practice
EC2
R
Palantir
  1. Sr. Data Engineer:

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL and Python

A candidate with a prior Palantir Cloud Foundry or Clinical Trial Data Model background is preferred.

Major accountabilities:

  • Responsible for data engineering: Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Have a good understanding of the Foundry platform landscape and its capabilities.
  • Perform the data analysis required to troubleshoot and resolve data-related issues.
  • Define company data assets (data models) and the PySpark / Spark SQL jobs that populate them.
  • Design data integrations and the data quality framework.
  • Design & implement integrations with internal and external systems and the F1 AWS platform using the Foundry Data Connector or Magritte agent.
  • Collaborate with data scientists, data analysts and technology teams to document and leverage their understanding of Foundry's integration with different data sources; actively participate in agile work practices.
  • Coordinate with Quality Engineering to ensure that all quality controls, naming conventions and best practices are followed.
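The ingestion-with-quality-checks responsibility above can be sketched in plain Python. Foundry's own transform decorators and Data Connector APIs are proprietary and omitted here, so the pipeline shape and all column names (`subject_id`, `visit`, `status`) are illustrative assumptions, not the actual data model:

```python
# Sketch: validate incoming rows, then transform them, failing fast on
# quality violations -- the pattern a Foundry pipeline stage would follow.

def validate(rows, required_cols):
    """Raise if any row is missing a required column (a simple quality gate)."""
    for row in rows:
        missing = [c for c in required_cols if row.get(c) is None]
        if missing:
            raise ValueError(f"row {row} missing required columns: {missing}")
    return rows

def transform(rows):
    """Normalize subject IDs and keep only completed visits."""
    cleaned = [
        {
            "subject_id": str(r["subject_id"]).strip().upper(),
            "visit": r["visit"],
            "status": r["status"].lower(),
        }
        for r in rows
    ]
    return [r for r in cleaned if r["status"] == "completed"]

raw = [
    {"subject_id": " s001 ", "visit": 1, "status": "Completed"},
    {"subject_id": "S002", "visit": 1, "status": "Scheduled"},
]
result = transform(validate(raw, ["subject_id", "visit", "status"]))
print(result)  # [{'subject_id': 'S001', 'visit': 1, 'status': 'completed'}]
```

In a real Foundry pipeline the quality gate would be a dedicated check stage rather than an inline function, but the separation of validation from transformation is the same.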

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with a Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, including 2+ years on the Palantir Foundry platform and 4+ years on big data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • B.Tech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry platform and Foundry custom app development
  • Able to design and implement data integration between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platform and stack
  • Strong in API-based architecture and concepts; able to do a quick PoC using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users

 Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision



Similar jobs

one-to-one, one-to-many, and many-to-many
Chennai
5 - 10 yrs
₹1L - ₹15L / yr
AWS CloudFormation
skill iconPython
PySpark
AWS Lambda

5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.

2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.

Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.

Experience building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.

Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.

Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.

Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.

Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.

Experience in working in Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GITHUB), JIRA, JIRA Xray and Confluence.

Experience or exposure to design and development using Full Stack tools.

Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.

Bachelor's or master's degree in computer science or related field.
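Since the role above centers on AWS Lambda and event-driven ingestion, here is a minimal Lambda-style handler sketch for an S3 put notification. The event shape follows the standard S3 notification structure, but the bucket, prefixes, and routing targets are illustrative assumptions:

```python
# Sketch: route each newly arrived S3 object to a downstream pipeline
# based on its key prefix. Runs as a plain function, so it can be tested
# locally without AWS.

def handler(event, context=None):
    """Lambda-style entry point for S3 put notifications."""
    routed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if key.startswith("raw/concur/"):
            target = "concur_pipeline"
        elif key.startswith("raw/veeva/"):
            target = "veeva_pipeline"
        else:
            target = "quarantine"
        routed.append({"bucket": bucket, "key": key, "target": target})
    return {"statusCode": 200, "routed": routed}

event = {"Records": [
    {"s3": {"bucket": {"name": "ingest-bucket"},
            "object": {"key": "raw/concur/2024/expense.csv"}}},
]}
print(handler(event)["routed"][0]["target"])  # concur_pipeline
```

A production version would hand each routed object to Glue, Kinesis, or EventBridge rather than returning the list, but the prefix-based dispatch is the core idea.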

 

 

Branch International
at Branch International
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
7yrs+
₹50L - ₹70L / yr
Data Structures
Algorithms
Object Oriented Programming (OOPs)
ETL
ETL architecture
+5 more

Branch Overview


Imagine a world where every person has improved access to financial services. People could start new businesses, pay for their children’s education, cover their emergency medical bills – the possibilities to improve life are endless. 


Branch is a global technology company revolutionizing financial access for millions of underserved banking customers today across Africa and India. By leveraging the rapid adoption of smartphones, machine learning and other technology, Branch is pioneering new ways to improve access and value for those overlooked by banks. From instant loans to market-leading investment yields, Branch offers a variety of products that help our customers be financially empowered.


Branch’s mission-driven team is led by the co-founders of Kiva.org and one of the earliest product leaders of PayPal. Branch has raised over $100 million from leading Silicon Valley investors, including Andreessen Horowitz (a16z) and Visa. 

 

With over 32 million downloads, Branch is one of the most popular finance apps in the world.

 

Job Overview

Branch launched in India in early 2019 and has seen rapid adoption and growth. In 2020 we started building out a full Engineering team in India to accelerate our success here. This team is working closely with our engineering team (based in the United States, Nigeria, and Kenya) to strengthen the capabilities of our existing product and build out new product lines for the company.


You will work closely with our Product and Data Science teams to design and maintain multiple technologies, including our API backend, credit scoring and underwriting systems, payments integrations, and operations tools. We face numerous interesting technical challenges ranging from maintaining complex financial systems to accessing and processing creative data sources for our algorithmic credit model. 


As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making: We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.


Responsibilities

  • Make significant contributions to Branch’s data platform including data models, transformations, warehousing, and BI systems by bringing in best practices.
  • Build customer facing and internal products and APIs with industry best practices around security and performance in mind.
  • Influence and shape the company’s technical and product roadmap by providing timely and accurate inputs and owning various outcomes.
  • Collaborate with peers in other functional areas (Machine Learning, DevOps, etc.) to identify potential growth areas and systems needed.
  • Guide and mentor junior engineers around you.
  • Scale our systems to ever-growing levels of traffic and handle complexity.


Qualifications

  • You have strong experience (8+ years) of designing, coding, and shipping data and backend software for web-based or mobile products.
  • Experience coordinating and collaborating with various business stakeholders and company leadership on critical functional decisions and technical roadmap.
  • You have strong knowledge of software development fundamentals, including relevant background in computer science fundamentals, distributed systems, data storage and processing, and agile development methodologies.
  • Have experience designing maintainable and scalable data architecture for ETL and BI purposes.
  • You are able to utilize your knowledge and expertise to code and ship quality products in a timely manner.
  • You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
  • You are highly entrepreneurial and thrive in taking ownership of your own impact. You take the initiative to solve problems before they arise.
  • You are an excellent collaborator & communicator. You know that startups are a team sport. You listen to others, aren’t afraid to speak your mind and always try to ask the right questions. 
  • You are excited by the prospect of working in a distributed team and company, working with teammates from all over the world.

Benefits of Joining

  • Mission-driven, fast-paced and entrepreneurial environment
  • Competitive salary and equity package
  • A collaborative and flat company culture
  • Remote first, with the option to work in-person occasionally
  • Fully-paid Group Medical Insurance and Personal Accidental Insurance
  • Unlimited paid time off including personal leave, bereavement leave, sick leave
  • Fully paid parental leave - 6 months maternity leave and 3 months paternity leave
  • Monthly WFH stipend alongside a one time home office set-up budget
  • $500 Annual professional development budget 
  • Discretionary trips to our offices across the globe, with global travel medical insurance 
  • Team meals and social events- Virtual and In-person

Branch International is an Equal Opportunity Employer. The company does not and will not discriminate in employment on any basis prohibited by applicable law. We’re looking for more than just qualifications -- so if you’re unsure that you meet the criteria, please do not hesitate to apply!

 

TensorGo Software Private Limited
Deepika Agarwal
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
skill iconPython
PySpark
Apache Airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered to the product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years experience with Python to develop scripts.

● Know your way around RESTful APIs (able to integrate; not necessarily publish).

● You are familiar with pulling and pushing files from SFTP and AWS S3.

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational Databases.

● Familiarity with Linux (and a Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, RedShift, Athena)

● Hands-on experience in AirFlow

● Understanding of best practices, common coding patterns and good practices around storing, partitioning, warehousing and indexing of data

● Experience in reading the data from Kafka topic (both live stream and offline)

● Experience in PySpark and Data frames

Responsibilities:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.
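The extract-transform-load responsibility above can be sketched with only the standard library, using sqlite3 as a stand-in for an internal database. Table and field names are illustrative, and the `extract` stub replaces what would really be an SFTP, S3, or API pull:

```python
# Sketch: a minimal ETL step -- pull rows, drop unparseable records,
# load the rest into a database, then verify with an aggregate query.
import sqlite3

def extract():
    # Stand-in for pulling from SFTP / AWS S3 / an API.
    return [("alice", "42.5"), ("bob", "n/a"), ("carol", "17.0")]

def transform(rows):
    # Keep only rows whose amount parses as a number.
    out = []
    for name, amount in rows:
        try:
            out.append((name, float(amount)))
        except ValueError:
            continue  # quarantine/log in a real pipeline
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS payments (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 59.5
```

Against Hadoop or a warehouse the load step changes, but the extract/transform/load separation, and rejecting bad records during transform, carries over directly.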

Amagi Media Labs
at Amagi Media Labs
3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more
1. 2 to 4 years of experience
2. Hands-on experience using Python, SQL and Tableau
3. Data Analyst
About Amagi (www.amagi.com): Amagi is a market leader in cloud-based media technology services for channel creation, distribution and ad monetization. Amagi's cloud technology and managed services are used by TV networks, content owners, sports rights owners and pay TV / OTT platforms to create 24x7 linear channels for OTT and broadcast and deliver them to end consumers. Amagi's pioneering and market-leading cloud platform has won numerous accolades and is deployed in over 40 countries by 400+ TV networks. Customers of Amagi include A+E Networks, Comcast, Google, NBC Universal, Roku, Samsung and Warner Media. This is a unique and transformative opportunity to participate in and grow a world-class technology company that changes the tenets of TV. Amagi is a private-equity-backed firm with investments from KKR (Emerald Media Fund), Premji Invest and Mayfield. Amagi has offices in New York, Los Angeles, London, New Delhi and Bangalore.

LinkedIn page: https://www.linkedin.com/company/amagicorporation
News: https://www.amagi.com/about/newsroom/amagi-clocks-120-yoy-quarterly-growth-as-channels-on-its-platform-grows-to-400/
Cofounder on YouTube: https://www.youtube.com/watch?v=EZ0nBT3ht0E
 

About Amagi & Growth


Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on Free-Ad-Supported TV and video services platforms. Amagi also offers 24x7 cloud managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization with deployments in over 40 countries. Amagi has offices in New York (Corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.

Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:

·   Annual revenue growth of 136%
·   44% increase in customers
·   50+ Free Ad Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
·   250+ channels added to its cloud platform taking the overall tally to more than 500
·   Approximately 2 billion ad opportunities every month supporting OTT ad-insertion for 1000+ channels
·   60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees + Contractors)
·   5-10x growth in ad impressions among top customers
 
Over the last 4 years, Amagi has grown more than 400%. Amagi now has an aggressive growth plan over the next 3 years - to grow 10X in terms of Revenue. In terms of headcount, Amagi is looking to grow to more than 600 employees over the next 1 year. Amagi is building several key organizational processes to support the high growth journey and has gone digital in a big way.
 
Fragma Data Systems
at Fragma Data Systems
8 recruiters
Minakshi Kumari
Posted by Minakshi Kumari
Remote only
1 - 5 yrs
₹10L - ₹15L / yr
SQL
PySpark
Responsible for developing and maintaining applications with PySpark 
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement Projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs - able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
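A "query of fair complexity" in the sense above typically means joins plus grouped aggregation. Spark SQL shares this SQL core with relational databases, so the sketch below runs the statement against sqlite3 purely so it executes anywhere; the schema and threshold are illustrative, and the same SQL text could be submitted via `spark.sql(...)` over registered temp views:

```python
# Sketch: per-region revenue via join + aggregation + HAVING filter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
CREATE TABLE customers (id INTEGER, region TEXT);
INSERT INTO customers VALUES (1, 'south'), (2, 'north');
INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 70.0);
""")

rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    HAVING SUM(o.amount) > 60
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('south', 150.0), ('north', 70.0)]
```

The PySpark DataFrame equivalent of the same logic is `orders.join(customers, ...).groupBy("region").agg(F.sum("amount"))` followed by a filter, which is the DataFrame-core-functions side of the requirement.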
RandomTrees
at RandomTrees
1 recruiter
HariPrasad Jonnalagadda
Posted by HariPrasad Jonnalagadda
Remote, Hyderabad, Bengaluru (Bangalore)
9 - 20 yrs
₹10L - ₹15L / yr
Natural Language Processing (NLP)
skill iconData Science
skill iconMachine Learning (ML)
Computer Vision
recommendation algorithm

Expert in Machine Learning (ML) & Natural Language Processing (NLP).

Expert in Python, Pytorch and Data Structures.

Experience in ML model life cycle (Data preparation, Model training and Testing and ML Ops).

Strong experience in NLP and NLU using transformers & deep learning.

Experience in federated learning is a plus

Experience with knowledge graphs and ontology.

Responsible for developing, enhancing, modifying, optimizing and/or maintaining applications, pipelines and codebase in order to enhance the overall solution.

Experience working with scalable, highly-interactive, high-performance systems/projects (ML).

Design, code, test, debug and document programs as well as support activities for the corporate systems architecture.

Working closely with business partners in defining requirements for ML applications and advancements of solution.

Engage in specifications in creating comprehensive technical documents.

Experience / Knowledge in designing enterprise grade system architecture for solving complex problems with a sound understanding of object-oriented programming and Design Patterns.

Experience in Test Driven Development & Agile methodologies.

Good communication skills - client facing environment.

Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers.

Good to have: working experience in Knowledge Graph based ML product development, and AWS/GCP based ML services.

Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹40L / yr
Data engineering
Data Engineer
AWS Lambda
Microservices
ETL
+8 more
Required Skills & Experience:
• 2+ years of experience in data engineering & strong understanding of data engineering principles using big data technologies
• Excellent programming skills in Python is mandatory
• Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL; exposure to NoSQL such as Cassandra or MongoDB will be a plus.
• Exposure to deploying ETL pipelines such as AirFlow, Docker containers & Lambda functions
• Experience in AWS cloud services such as AWS CLI, Glue, Kinesis, etc.
• Experience using Tableau for data visualization is a plus
• Ability to demonstrate a portfolio of projects (GitHub, papers, etc.) is a plus
• Motivated, can-do attitude and desire to make a change is a must
• Excellent communication skills
Artivatic
at Artivatic
3 recruiters
Layak Singh
Posted by Layak Singh
Bengaluru (Bangalore)
3 - 10 yrs
₹6L - ₹12L / yr
skill iconPython
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
TensorFlow
+3 more
Responsibilities:
- Define the short-term tactics and long-term technology strategy.
- Communicate that technical vision to technical and non-technical partners, customers and investors.
- Lead the development of AI/ML related products as the team matures into lean, high-performing agile teams.
- Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources.
- Work collaboratively with the business, partners, and customers to consistently deliver business value.
- Own the vision and execution of developing and integrating AI & machine learning into all aspects of the platform.
- Drive innovation through the use of technology and unique ways of applying it to business problems.

Experience and Qualifications:
- Masters or Ph.D. in AI, computer science, ML, electrical engineering or related fields (statistics, applied math, computational neuroscience)
- Relevant experience leading and building teams and establishing technical direction
- A well-developed portfolio of past software development, composed of some mixture of professional work, open source contributions, and personal projects
- Experience in leading and developing remote and distributed teams
- Ability to think strategically and apply that through to innovative solutions
- Experience with cloud infrastructure
- Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value
- Experience in agent architectures, deep learning, neural networks, computer vision and NLP
- Experience with distributed computational frameworks (YARN, Spark, Hadoop)
- Proficiency in Python, C++. Familiarity with DL frameworks (e.g. neon, TensorFlow, Caffe, etc.)

Personal Attributes:
- Excellent communication skills
- Strong fit with the culture
- Hands-on approach, self-motivated with a strong work ethic
- Ability to learn quickly (technology, business models, target industries)
- Creative and inspired

Superpowers we love:
- Entrepreneurial spirit and a vibrant personality
- Experience with the lean startup build-measure-learn cycle
- Vision for AI
- Extensive understanding of why things are done the way they are done in agile development
- A passion for adding business value

Note: The selected candidate will be offered ESOPs too.

Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3 - 10 Years
Data ToBiz
at Data ToBiz
2 recruiters
PS Dhillon
Posted by PS Dhillon
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Datawarehousing
Amazon Redshift
Analytics
skill iconPython
skill iconAmazon Web Services (AWS)
+2 more
Job Responsibilities :  
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develops or modifies procedures to solve complex database design problems, including performance, scalability, security and integration issues for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adapt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software according to required standards, and configure hardware clusters as per the scale of data.
Data integration using enterprise development tool-sets (e.g. ETL, MDM, CDC, Data Masking, Quality).
Maintain and develop all logical and physical data models for enterprise data warehouse (EDW).
Contributes to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Acts independently to expose and resolve problems.  
Participate in data warehouse health monitoring and performance optimizations as well as quality documentation.

Job Requirements :  
2+ years experience working in software development & data warehouse development for enterprise analytics.
2+ years of working with Python, with major experience in Redshift as a must and exposure to other warehousing tools.
Deep expertise in data warehousing, dimensional modeling and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources like Relational (MySQL, PostgreSQL, Oracle), Cloud-based storage.
Experience with analytic and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience in the AWS cloud stack (S3, Glue, Redshift, Lake Formation).
Experience in various DevOps practices helping the client to deploy and scale the systems as per requirement.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of Logistics and/or Transportation Domain is a plus.
Ability to handle/ingest very huge data sets (both real-time data and batched data) in an efficient manner.
SpotDraft
at SpotDraft
4 recruiters
Madhav Bhagat
Posted by Madhav Bhagat
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
skill iconPython
TensorFlow
Caffe
We are building the AI core for a legal workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required Skills/Experience:
- Python
- Basics of deep learning
- Experience with one ML framework (like TensorFlow, Keras, Caffe)

Preferred Skills/Experience:
- Exposure to ML concepts like LSTM, RNN and conv nets
- Experience with NLP and the Stanford POS tagger
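For a sense of the extraction task described above, here is a toy rule-based baseline that pulls fields out of contract text with regular expressions. The patterns, field names, and sample sentence are all illustrative; this is the kind of brittle heuristic the trained models (LSTM/transformer extractors) are meant to replace with something robust:

```python
# Sketch: rule-based extraction of an effective date and the contracting
# parties from a contract sentence. A pre-ML baseline, not the product model.
import re

def extract_fields(text):
    fields = {}
    # Dates like "3 March 2021".
    date = re.search(r"\b(\d{1,2} [A-Z][a-z]+ \d{4})\b", text)
    if date:
        fields["effective_date"] = date.group(1)
    # Parties in the "between X and Y," clause.
    parties = re.search(r"between (.+?) and (.+?)[,.]", text)
    if parties:
        fields["parties"] = [parties.group(1), parties.group(2)]
    return fields

sample = ("This Agreement, dated 3 March 2021, is made between "
          "Acme Corp and Beta LLC, and governs the services described below.")
print(extract_fields(sample))
# {'effective_date': '3 March 2021', 'parties': ['Acme Corp', 'Beta LLC']}
```

Real contracts defeat patterns like these (varied date formats, nested party names), which is exactly why the role trains sequence models instead.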