


We are seeking a motivated and detail-oriented Junior Backend Engineer to join our team. The role involves developing and maintaining backend systems using Django and Python, while also managing on-premises hardware and deployments. You will work closely with cross-functional teams to ensure our software is reliably configured, deployed, and maintained in an on-prem environment.
Key Responsibilities
Backend Development: Design, develop, and maintain backend services and APIs using Python (Django or Flask).
On-Prem Deployment: Install, configure, and manage software on on-premises hardware, including servers and networking equipment.
Database Management: Design database schemas, write efficient queries, and optimize data access layers (PostgreSQL, MySQL, etc.).
API Integration: Implement and maintain RESTful APIs and integrate with third-party services.
Troubleshooting & Debugging: Diagnose and resolve application and infrastructure issues to ensure high availability.
Version Control & CI/CD: Use Git for version control and contribute to continuous integration and delivery pipelines (GitHub Actions, Jenkins, etc.).
Security & Compliance: Work with the team to ensure software and infrastructure meet security and compliance requirements.
Documentation: Maintain clear, comprehensive documentation for your code, configurations, and on-prem deployment processes.
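To illustrate the backend-development responsibility above, here is a minimal sketch of a JSON REST endpoint using Flask; the resource names, routes, and in-memory store are illustrative assumptions, not part of the posting:

```python
# Minimal sketch of a JSON REST endpoint (illustrative names throughout).
from flask import Flask, jsonify, request

app = Flask(__name__)
items = {}  # in-memory stand-in for a real PostgreSQL/MySQL data layer

@app.route("/api/items", methods=["POST"])
def create_item():
    payload = request.get_json()
    item_id = len(items) + 1
    items[item_id] = payload
    return jsonify({"id": item_id, **payload}), 201

@app.route("/api/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = items.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": item_id, **item})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```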
About the Role: Looking for QA engineers (candidates from SaaS-based companies only)
You will help the team build an awesome testing platform that QA can leverage to test the product for quality and stability.
You should have hands-on software development experience to design and implement test automation infrastructure. You will have to implement test automation systems and provide seamless integration with DevOps infrastructure. You should have strong interpersonal and communication skills.
Location: Mumbai
Responsibilities:
Implement test automation for web interfaces and mobile applications
Implement functional and non-functional API testing and integration testing
Carry out load testing and performance testing as required
Integrate the test suite and testing tools with DevOps infrastructure and CI/CD pipelines
Help the QA team diagnose test-case failures
Follow agile test methodology for timely delivery of test-case automation
Maintain the existing automation regression suite as the product changes
Requirements:
Bachelor's degree in Computer Science, Information Technology or related field
0 to 1 year of relevant experience in the testing domain
Self-driven with the ability to work independently and carry out assignments as required.
Knowledge of the Selenium or Appium framework with Java is a must-have
Programming ability in Java
Exposure to API testing, load testing, and system integration will be a big plus (an illustrative sketch follows this list)
Experience working on Linux/Unix OS and cloud environments such as AWS
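The sketch referenced above: a minimal functional API test. The posting itself asks for Java with Selenium/Appium, so this Python/pytest example only illustrates the pattern, and the base URL and endpoints are hypothetical:

```python
# Minimal functional API test with pytest + requests (hypothetical endpoint).
import requests

BASE_URL = "https://api.example.com"  # assumed; not from the posting

def test_create_then_fetch_item():
    # Create a resource, then verify it can be read back unchanged.
    created = requests.post(f"{BASE_URL}/items", json={"name": "widget"})
    assert created.status_code == 201
    item_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/items/{item_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "widget"
```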
Intuitive is the fastest-growing top-tier cloud solutions and services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description:
- Integrate quality gates into the CI/CD pipeline and push all flaws/issues to the developer's IDE (as far left as possible) - ideally in the code repository, but no later than when code reaches the artifact repository.
- Demonstrable experience in containerization (Docker) and orchestration (Kubernetes)
- Experience setting up self-managed Kubernetes clusters without using managed cloud offerings like EKS
- Experience working with AWS - managing services such as EC2, S3, CloudFront, VPC, SNS, Lambda, Auto Scaling, IAM, RDS, EBS, Kinesis, SQS, DynamoDB, ElastiCache, Redshift, CloudWatch, and Amazon Inspector (see the sketch after this list)
- Familiarity with Linux and UNIX systems (e.g., CentOS, Red Hat) and command-line system administration with Bash, Vim, and SSH
- Hands-on experience in configuration management of server farms (using tools such as Puppet, Chef, or Ansible)
- Demonstrated understanding of ITIL methodologies; ITIL v3 or v4 certification
- Kubernetes CKA or CKAD certification is nice to have
- Excellent communication skills
- Open to working in the EST time zone
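The sketch referenced in the AWS bullet above: a minimal boto3 example of the day-to-day service management the posting lists, here auditing running EC2 instances. The region and the right-sizing use case are illustrative assumptions:

```python
# Minimal boto3 audit of running EC2 instances (region is an assumption).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # Print instance id and type, e.g. to feed a right-sizing review.
        print(instance["InstanceId"], instance["InstanceType"])
```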
- Core Java: advanced-level competency; should have worked on projects involving core Java development.
- Linux shell: advanced-level competency; work experience with Linux shell scripting and command of the important shell commands.
- RDBMS, SQL: advanced-level competency; should have expertise in SQL query syntax and be well versed in aggregations and joins.
- Data structures and problem solving: should be able to choose the appropriate data structure for a problem.
- AWS cloud: good to have experience with the AWS serverless toolset along with AWS infrastructure.
- Data engineering ecosystem: good to have experience with and knowledge of data engineering, ETL, and data warehousing (any toolset).
- Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets.
- Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries and implementing MapReduce and Spark jobs; Spark implementation in Scala is a plus (a short PySpark sketch follows this list).
- Airflow, Oozie, Sqoop, ZooKeeper, Kafka: good to have knowledge of the purpose and workings of these toolsets; working experience is a plus.
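The PySpark sketch referenced above: a minimal Spark job with a join and an aggregation of the kind the list describes. The input paths, schema, and column names are illustrative assumptions:

```python
# Minimal Spark job: join two datasets and aggregate (illustrative schema).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("segment-spend-rollup").getOrCreate()

orders = spark.read.parquet("hdfs:///data/orders")        # assumed path
customers = spark.read.parquet("hdfs:///data/customers")  # assumed path

# Total spend and order count per customer segment.
rollup = (
    orders.join(customers, "customer_id")
    .groupBy("segment")
    .agg(F.sum("amount").alias("total_spend"),
         F.count("*").alias("order_count"))
)
rollup.write.mode("overwrite").parquet("hdfs:///data/rollups/segment_spend")
```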
Our Infrastructure team is looking for an excellent Big Data Engineer to join the core group that designs the industry's leading Micro-Engagement Platform. The role involves designing and implementing big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist on the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters (see the sketch after this list)
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge of database performance, scaling, tuning, migration, and version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations in the cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies
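The sketch referenced in the HA bullet above: a minimal pymongo example of enabling sharding on a collection through a mongos router. The connection string, database, collection, and shard key are illustrative assumptions, not Ushur's actual setup:

```python
# Minimal sharding setup via a mongos router (all names are assumptions).
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.internal:27017")

# Enable sharding for the database, then shard a collection on a hashed
# key so writes distribute evenly across shards.
client.admin.command("enableSharding", "engagements")
client.admin.command(
    "shardCollection",
    "engagements.events",
    key={"customer_id": "hashed"},
)
```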
The experience you need:
● Engineering degree in Computer Science or related field
● 10+ years of experience working with databases, most of it with NoSQL technologies
● Expertise in implementing and maintaining distributed big data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms: AWS Redshift, Google BigQuery, or Snowflake
● Exposure to real-time processing technologies such as Apache Kafka and CDC tools (Debezium, Qlik Replicate); a consumer sketch follows this list
● Strong experience with the Linux operating system
● Solid knowledge of database concepts and of MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience with security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar extremely high
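The consumer sketch referenced above: a minimal kafka-python consumer reading Debezium-style change events. The topic name follows Debezium's convention but is assumed, as are the payload fields:

```python
# Minimal kafka-python consumer for Debezium-style CDC events.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",  # Debezium-style topic name (assumed)
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v) if v else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    if message.value is None:  # skip tombstone records
        continue
    # Debezium wraps each row change in a payload with before/after images.
    payload = message.value.get("payload", {})
    print(payload.get("op"), payload.get("after"))
```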
Nice-to-haves
● Proficient in one or more of Python/Node.js/Java/similar languages
Why you want to work with us:
● Great Company Culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture with innovative ideas, where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (a day off to care for yourself, every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office.
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong learning. Certification courses are reimbursed. The Ushur Community offers extensive resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and location. We seek to create an environment where all our employees can thrive in both their professional and personal lives.
What we look for:
We are looking for an associate who will crunch data from various sources and surface the key points in it, help us improve existing pipelines and build new ones as requested, visualize the data when required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather requirements for data or analysis, and act on them.
- Write new data pipelines and maintain the existing ones.
- Gather data from various databases and derive the required metrics from them (see the sketch after this list).
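The sketch referenced above: a minimal example of pulling rows from a database and deriving a metric with pandas and SQLAlchemy. The connection string, table, columns, and the metric itself are illustrative assumptions:

```python
# Minimal metric pull: read rows from a database, derive a per-customer
# metric (connection string, table, and columns are assumptions).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@db.example:5432/analytics")

orders = pd.read_sql("SELECT customer_id, amount FROM orders", engine)

# Example metric: average order value per customer.
avg_order_value = (
    orders.groupby("customer_id")["amount"]
    .mean()
    .rename("avg_order_value")
    .reset_index()
)
print(avg_order_value.head())
```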
Required Skills:
- Experience with Python and libraries like Pandas and NumPy.
- Experience in SQL and an understanding of NoSQL databases.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any other NoSQL database.
- Experience in Elasticsearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading and investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of the DevOps team, and your remit would include the following:
- Private Cloud - Design and maintain a high-performance, reliable network architecture to support HPC applications.
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing and scheduling analytical jobs. Implement controls that allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and a data isolation policy between different internal divisions.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage Solution - Optimize storage solutions like NetApp, EMC, and Quobyte for analytical jobs. Monitor their performance daily to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google Cloud utilization in the firm to increase efficiency, improve collaboration, and reduce cost. Maintain the environment for our existing use cases, and explore further potential areas for public cloud within the firm.
- Backups - Identify and automate backups of all crucial data, binaries, code, etc., in a secure manner, at intervals warranted by each use case. Ensure that recovery from backup is tested and seamless (a minimal sketch follows this list).
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development teams to freeze configurations/playbooks for various teams and internal applications. Deploy and maintain standard tools such as Ansible, Puppet, or Chef for this purpose.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put a monitoring mechanism in place for early detection of any suspicious activity.
- Third-Party Tools - Maintain all third-party tools used for development and collaboration, including a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo.
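The backup sketch referenced above: a minimal example that archives a directory and uploads it to S3 with boto3. The paths and bucket name are illustrative assumptions, and a real setup would also restore from the archive periodically to prove recovery works:

```python
# Minimal backup sketch: archive a directory and upload to S3 (assumed names).
import tarfile
import time
import boto3

SOURCE_DIR = "/srv/critical/configs"   # assumed path
BUCKET = "example-backup-bucket"       # assumed bucket

archive = f"configs-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(SOURCE_DIR, arcname="configs")

boto3.client("s3").upload_file(archive, BUCKET, f"daily/{archive}")
print(f"uploaded {archive} to s3://{BUCKET}/daily/")
```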
Qualifications
- Bachelor's or Master's degree, preferably in CSE/IT
- 10+ years of relevant experience in a sysadmin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing
- Must have a strong grasp of automation and data management tools
- Proficient in scripting languages and Python
Desirables
- Professional attitude; a cooperative, mature, focused, structured, and well-considered approach to work; strong troubleshooting skills
- Exhibits a high level of individual initiative and ownership, and collaborates effectively with other team members
APT Portfolio is an equal opportunity employer

Location: Bangalore
We are looking for the right Backend Developer.
What you will work on: Build a scalable API platform that will enhance our customer experience and propel our logistics. You will be part of our Bangalore team of ambitious and talented engineers, who put their best together to build architecturally sound and scalable systems.
What can CasaOne promise you –
An opportunity to:
• increase your rate of learning exponentially by defining hard problems and solving them
• partake in a high-growth journey and increase revenues 5x+ Y-o-Y
• be an early innovator in the shifting trend from the 'ownership economy' to the 'access economy'
• build a category-defining platform for FF&E (Furniture, Fixtures, and Equipment) leasing
• build high-performance teams
The must-haves
• Bachelor’s or Master’s degree in engineering
• Good understanding of algorithms, data structures & design patterns
• A minimum of 4 years of work experience
Experience required in
• Building distributed systems & service-oriented architecture
• Asynchronous programming, Test-Driven Development (TDD) (see the sketch at the end of this posting)
• Writing (delightful) APIs & integration patterns
• RDBMS & NoSQL databases
• Continuous integration & deployment (CI/CD) tools like Git and Jenkins
• Cloud computing platforms - AWS/ Azure/ Google Cloud
Good to know: CasaOne backend services are written in NodeJS. Experience in NodeJS will be handy, but it isn't mandatory.
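The sketch referenced in the must-haves: since CasaOne's services are in NodeJS, this Python example (kept in the one language this document's sketches use) only illustrates the TDD style; the function and its tests are hypothetical:

```python
# Illustrative TDD-style unit: a small function plus the tests that drive it.
import pytest

def allocate_inventory(available: int, requested: int) -> int:
    """Return how many units can actually be shipped (hypothetical helper)."""
    if requested < 0:
        raise ValueError("requested must be non-negative")
    return min(available, requested)

def test_allocation_caps_at_available_stock():
    assert allocate_inventory(available=3, requested=5) == 3

def test_negative_requests_are_rejected():
    with pytest.raises(ValueError):
        allocate_inventory(available=3, requested=-1)
```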


- BS/MS in Computer Science or Engineering.
- 8+ years of experience in software development in an object-oriented language such as Java, .NET, or Node.js
- Exceptional design, coding, and problem-solving skills, with a bias for architecture at scale.
- Experience with HTML5, JavaScript, and TypeScript, and with front-end technologies like AngularJS, Redux/React, and upcoming web technologies.
- Real-world experience developing large-scale commercial services with robust performance, resiliency, and telemetry, delivered both online and on-prem.
- Strong knowledge of computer science, algorithms, and design patterns.
- Ability to approach complex problems with thorough design.



