Python Jobs in Hyderabad
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake concepts
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages: Python & PySpark (see the sketch below)
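To make the stack concrete, here is a minimal PySpark sketch of the kind of file-based ingestion described above: reading raw CSV from an S3 data lake and writing partitioned Parquet. The bucket, paths, and partition column are illustrative placeholders, not details from the posting.

# Minimal PySpark sketch: ingest raw CSV from S3 and write partitioned Parquet.
# Bucket names, paths, and the partition column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-ingest").getOrCreate()

# Read raw files from the landing zone of the data lake.
raw = spark.read.option("header", True).csv("s3://example-lake/raw/orders/")

# Write a curated, partitioned Parquet dataset for downstream querying.
(raw.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/curated/orders/"))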
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Ingest data from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Process and transform data using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using a language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
QUALIFICATIONS
- 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streams / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
About the company
A strong cross-functional team of designers, software developers, and hardware experts who love creating technology products and services. We are not just an outsourcing partner: with our deep expertise across several business verticals, we bring our best practices so that your product journey is a breeze.
We love healthcare, medical devices, finance, and consumer electronics, but we love almost any domain where we can build technology products and services. In the past, we have created several niche and novel concepts and products for our customers, and we believe we still learn every day to widen our horizons!
Introduction - Advanced Technology Group
As an extension to solving the continuous medical education needs of doctors through the courses platform, Iksha Labs also developed several cutting-edge solutions for simulated training and education, including
- Virtual Reality and Augmented Reality based surgical simulations
- Hand and face-tracking-based simulations
- Remote immersive and collaborative training through Virtual Reality
- Machine learning-based auto-detection of clinical conditions from medical images
Job Description
The ideal candidate will be responsible for developing high-quality applications. They will also be responsible for designing and implementing testable and scalable code.
Key Skills/Technology
- Good command of C and C++, with algorithms and data structures
- Image Processing
- Qt (Expertise)
- Python (Expertise)
- Embedded Systems
- Good working knowledge of STL/Boost Algorithms and Data structures
Responsibilities
- Develop quality software and web applications
- Analyze and maintain existing software applications
- Develop scalable, testable code
- Discover and fix programming bugs
Qualifications
Bachelor's degree or equivalent experience in Computer Science/Electronics and Communication or a related field.
Industry Type
Medical / Healthcare
Functional Area
IT Software - Application Programming, Maintenance
Key responsibilities:
- Design and implement cloud solutions; build MLOps on cloud (AWS, Azure, or GCP)
- Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools
- Review data science models; run code refactoring and optimization, containerization, deployment, versioning, and monitoring of model quality
- Test and validate data science models, and automate those tests
- Communicate with a team of data scientists, data engineers, and architects; document the processes
Required Qualifications:
- Ability to design and implement cloud solutions and to build MLOps pipelines on cloud platforms (AWS, MS Azure, or GCP)
- Experience with MLOps frameworks like Kubeflow, MLflow, DataRobot, Airflow, etc. (see the sketch below), and with Docker, Kubernetes, and OpenShift
- Programming languages like Python, Go, Ruby, or Bash; a good understanding of Linux; knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
- Ability to understand the tools used by data scientists, and experience with software development and test automation
- Fluent in English, good communication skills, and the ability to work in a team
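As a hedged illustration of the MLOps tooling named above, here is a minimal MLflow tracking sketch; the experiment name, parameter, and metric values are invented for the example, and other frameworks (Kubeflow, DataRobot) would look different.

# Minimal MLflow experiment-tracking sketch; names and values are made up.
import mlflow

mlflow.set_experiment("example-experiment")

with mlflow.start_run():
    # Log a hyperparameter and a resulting metric for this run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)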
Desired Qualifications:
- Bachelor’s degree in Computer Science or Software Engineering
- Experience using AWS, MS Azure, or GCP services.
- An associate cloud certification is good to have
Company Overview: An 8-year-old IT services and consulting company based in Hyderabad that helps customers maximize product value while delivering rapid incremental innovation. It has extensive SaaS M&A experience, including 20+ closed transactions on both the buy and sell sides, has over 100 employees, and is looking to grow the team.
Location: Hyderabad
Position: Python Developer
Experience: 8+ years of commercial experience
Mandatory skills: Python, Django
Interview Process:
Offline Tech Test
Customer technical Round
CTO Discussion
Job Description:
5 to 8+ years of experience as a Python developer.
Expert knowledge of Python and related frameworks including Django and Flask (a minimal Django sketch follows below).
A deep understanding of multi-process architecture and the threading limitations of Python.
Familiarity with server-side templating languages including Jinja2 and Mako.
Ability to integrate multiple data sources into a single system.
Familiarity with testing tools.
Ability to collaborate on projects and work independently when required.
Databases: MySQL, PostgreSQL, MongoDB; data libraries: pandas, NumPy.
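For orientation, a minimal Django sketch of the kind of view work implied above, assuming an existing Django project; the route and payload are illustrative, not part of the job description.

# A tiny Django view plus URL wiring (e.g. in a urls.py of an existing project).
from django.http import JsonResponse
from django.urls import path

def health(request):
    # JsonResponse serializes the dict and sets the JSON content type.
    return JsonResponse({"status": "ok"})

urlpatterns = [path("health/", health)]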
A leader in Cognitive and Emerging Technologies Business
Job Title: Sr. Data Engineer
Experience: 5 to 8 years
Work Location: Hyderabad (option to work remotely)
Skillset: Python, PySpark, Kafka, Airflow, SQL, NoSQL, API integration, data pipelines, Big Data, AWS/GCP/OCI/Azure
Selection Process:
1. Assignment
2. Tech Round -I
3. Tech Round - II
4. HR Round
Calling out Python ninjas to showcase their expertise in a stimulating environment, geared towards building cutting-edge products and services. If you have a knack for data processing and scripting, and are excited about delivering scalable, high-quality data ingestion and API integration solutions, then we are looking for you!
You will get a chance to work on exciting projects at our state-of-the-art office, grow along with the company and be fruitfully rewarded for your efforts!
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered to product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years experience with Python to develop scripts.
● Know your way around RESTful APIs (able to integrate; publishing is not necessary). See the sketch after this list.
● You are familiar with pulling and pushing files from SFTP and AWS S3.
● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational databases.
● Familiarity with Linux (and a Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL)
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, RedShift, Athena)
● Hands-on experience in Airflow
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing, and indexing of data
● Experience in reading data from Kafka topics (both live streams and offline)
● Experience in PySpark and DataFrames
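A short, hedged sketch of the S3 and REST integration tasks listed above; the bucket, key, and API URL are placeholders, not systems named by the employer.

# Pull a file from S3 and call a REST API; all names are illustrative.
import boto3
import requests

s3 = boto3.client("s3")
s3.download_file("example-bucket", "inbound/orders.csv", "/tmp/orders.csv")

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()  # Fail fast on non-2xx responses.
orders = resp.json()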
Responsibilities:
You’ll:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment; create high- and low-level designs for concurrent, high-throughput, low-latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU, and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
- SQL -> MSSQL SERVER -> 6 Years.
- Databricks -> 3 Years
- Azure Data Factory -> 5 Years.
- SparkSQL -> 3 Years.
- Python -> Good command.
- Tableau -> Good command.
- Should be good at Tableau server/DevOps and Tableau Desktop.
- The candidate should ideally work on productionizing the dashboards, troubleshooting, and Tableau Server management for SCAN.
Airflow developer:
Experience: 5 to 10 years; relevant experience must be above 4 years.
Work location: Hyderabad (Hybrid Model)
Job description:
· Experience in working on Airflow.
· Experience in SQL, Python, and Object-oriented programming.
· Experience in the data warehouse, database concepts, and ETL tools (Informatica, DataStage, Pentaho, etc.).
· Azure experience and exposure to Kubernetes.
· Experience in Azure data factory, Azure Databricks, and Snowflake.
Required Skills: Azure Databricks/Data Factory, Kubernetes/Docker, DAG development (see the sketch below), hands-on Python coding.
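By way of illustration, a minimal Airflow 2.x DAG of the kind of development this role covers; the DAG id, schedule, and task are invented for the sketch.

# A minimal Airflow 2.x DAG with a single Python task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")  # Placeholder for real extraction logic.

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)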
Essential Duties and Responsibilities:
- Build data systems and pipelines
- Prepare data for ML modeling
- Combine raw information from different sources
- Conduct complex data analysis and report on results
Work Experience :
- 3 years of experience working with Node, AI/ML & Data Transformation Tools
- Hands-on experience with ETL & data visualization tools
- Familiarity with Python (NumPy, pandas; see the sketch below)
- Experience with SQL & NoSQL DBs
Must Have: Python, data warehouse tools, ETL, SQL/MongoDB, data modeling, data transformation, data visualization
Nice to have: MongoDB/ SQL, Snowflake, Matillion, Node.JS, ML model building
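As a small, hedged example of combining raw information from different sources (mentioned above), a pandas sketch; the file names and columns are hypothetical.

# Merge two sources and aggregate, a typical pre-ML transformation step.
import pandas as pd

orders = pd.read_csv("orders.csv")
users = pd.read_json("users.json")

merged = orders.merge(users, on="user_id", how="left")
summary = merged.groupby("country")["amount"].sum().reset_index()
print(summary)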
I am looking for immediate joiners. Kindly share your notice period information while applying.
The candidate needs to be strong in SQL queries and Power BI.
- Experience with Power BI (important)
- Experience with relational databases like MySQL, MSSQL, etc. (important)
- Any scripting knowledge (Python, shell scripting, etc.) (important)
- Experience with DB connectors across multiple platforms.
- Knowledge on AWS Redshift is a plus
Roles and Responsibilities
Big Data Engineer + Spark Responsibilities
- At least 3 to 4 years of relevant experience as a Big Data Engineer
- Minimum 1 year of relevant hands-on experience with the Spark framework
- Minimum 4 years of application development experience using a programming language like Scala/Java/Python
- Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala
- Strong programming experience building applications/platforms using Scala/Java/Python
- Experienced in implementing Spark RDD transformations and actions to implement business analysis (see the sketch below)
- An efficient interpersonal communicator with sound analytical problem-solving skills and management capabilities
- Strives to keep the slope of the learning curve high, and able to quickly adapt to new environments and technologies
- Good knowledge of the agile methodology of software development
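For concreteness, a small PySpark sketch of RDD transformations and actions as mentioned above; the data is synthetic and the session is local.

# RDD transformation (lazy) followed by an action that triggers execution.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("rdd-demo").getOrCreate()

rdd = spark.sparkContext.parallelize([1, 2, 3, 4])
squares = rdd.map(lambda x: x * x)  # Transformation: nothing runs yet.
print(squares.collect())            # Action: [1, 4, 9, 16]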
Job Role: Big Data Engineer
Experience: 4+ years of relevant experience in Big Data
Must have a minimum of 4 years of hands-on programming experience in one or more programming languages.
Must have experience in Scala and Python.
Good experience with distributed systems like Hadoop and HDFS, and with NoSQL databases
- Must have experience in Spark, Hive, or Presto
- Must have experience with AWS Cloud services: EC2, EMR, Athena, AWS S3
- Experience with DynamoDB and RDS
- Must have hands-on experience with AWS Lambda (see the sketch below)
- Designing and developing dashboards using Splunk
- Competent in design/implementation for reliability, availability, scalability, and performance
- Deep problem-solving skills to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
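A hedged sketch of the AWS Lambda experience asked for above: a handler that persists an event to DynamoDB with boto3. The table name and event fields are placeholders.

# Illustrative Lambda handler; "example-events" and the event shape are made up.
import boto3

table = boto3.resource("dynamodb").Table("example-events")

def handler(event, context):
    # Persist the triggering payload; Lambda passes it in as `event`.
    table.put_item(Item={"id": event["id"], "payload": event.get("payload", "")})
    return {"status": "stored"}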
Key Skills Required for Lead DevOps Engineer
Containerization Technologies
Docker, Kubernetes, OpenShift
Cloud Technologies
AWS/Azure, GCP
CI/CD Pipeline Tools
Jenkins, Azure DevOps
Configuration Management Tools
Ansible, Chef
SCM Tools
Git, GitHub, Bitbucket
Monitoring Tools
New Relic, Nagios, Prometheus
Cloud Infra Automation
Terraform
Scripting Languages
Python, Shell, Groovy
· Ability to decide the architecture for the project and the tools, as per availability
· Sound knowledge of deployment strategies and the ability to define timelines
· Team handling skills are a must
· Debugging skills are an advantage
· Good to have knowledge of databases like MySQL and PostgreSQL
· Advantageous to be familiar with Kafka and RabbitMQ
· Good to have knowledge of web servers to deploy web applications
· Good to have knowledge of code-quality tools like SonarQube and vulnerability scanning
· Experience in DevSecOps is an advantage
Note: Tools mentioned in bold are a must and others are added advantage
Hi!
We are “Potentiam”- a hyper-growth service provider for global clients in the technology space.
About the Client: a product-based company and the largest global player in providing climate solutions through innovative technology.
One more thing- it’s a full-time job
As a UI Python Automation Tester -
What skills should you have?
- Amazing skills in Python automation testing
- Tried and trusted Selenium automation tools (intermediate; see the sketch below)
- You possess great communication skills
- Expert in test case management tools like Jira, and in Git, using Git flow or a similar branching model
- Experience developing in a CI/CD process
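To give a flavor of the Selenium work above, a minimal Python sketch; the URL and element locators are illustrative, and a local ChromeDriver is assumed.

# Open a page, fill a field, and assert on the resulting title.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title
finally:
    driver.quit()  # Always release the browser, even on failure.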
Who are you?
- You are a gifted individual contributor and a great team player
- You troubleshoot and report on issues within hosted and local environments
- You take ownership of tasks and are happy to manage them through to completion
JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
Company Description
Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
Job Responsibilities:
- Design and implement machine learning, information extraction, probabilistic matching algorithms and models
- Research and develop innovative, scalable and dynamic solutions to hard problems
- Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
- Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Be a valued contributor in shaping the future of our products and services
- You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
- Be part of a fast-paced, fun-focused, agile team
Job Requirement:
- 4+ years of industry experience
- Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
- Strong mathematics, statistics, and data analytics skills
- Solid coding and engineering skills preferably in Machine Learning (not mandatory)
- Proficient in Java, Python, and Scala
- Industry experience building and productionizing end-to-end systems
- Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
- Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.
Position Summary
We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect the following to fulfill this role:
- Building accurate machine learning models is the main goal of a machine learning engineer (a small sketch follows this list)
- Linear Algebra, Applied Statistics and Probability
- Building Data Models
- Strong knowledge of NLP
- Good understanding of multithreaded and object-oriented software development
- Mathematics, Mathematics and Mathematics
- Collaborate with Data Engineers to prepare data models required for machine learning models
- Collaborate with other product team members to apply state-of-the-art AI methods that include dialogue systems, natural language processing, information retrieval, and recommendation systems
- Build large-scale software systems and numerical computation topics
- Use predictive analytics and data mining to solve complex problems and drive business decisions
- Should be able to design the accurate ML end-to-end architecture including the data flows, algorithm scalability, and applicability
- Tackle situations where both the problem and the solution are unknown
- Solve analytical problems, and effectively communicate methodologies and results to the customers
- Adept at translating business needs into technical requirements and translating data into actionable insights
- Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.
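As a small sketch of "building accurate machine learning models" (see the list above), a scikit-learn example on synthetic data; the model choice and dataset are illustrative only.

# Train and evaluate a simple classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))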
Benefits
- Competitive salary for a startup
- Gain experience rapidly
- Work directly with the executive team
- Fast-paced work environment
About Phenom People
At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.
Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.
We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.
Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.
We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.
Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.
Kindly explore the company, Phenom ( https://www.phenom.com/ )
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/
Phenom | Talent Experience Management
We are looking for a Senior Software Developer to produce and implement functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects.
In this role, you should be able to work independently with little supervision. You should have excellent organization and problem-solving skills. If you also have hands-on experience in software development and agile methodologies, we’d like to meet you.
Your goal will be to develop high-quality software that is aligned with user needs and business goals.
Responsibilities
- Develop high-quality software design and architecture
- Identify, prioritize and execute tasks in the software development life cycle
- Develop tools and applications by producing clean, efficient code
- Automate tasks through appropriate tools and scripting
- Review and debug code
- Perform validation and verification testing
- Collaborate with internal teams and vendors to fix and improve products
- Document development phases and monitor systems
- Ensure software is up-to-date with the latest technologies.
Requirements and skills
- Proven experience as a Senior Software Engineer
- Extensive experience in software development, scripting and project management
- Knowledge of selected programming languages and frameworks (e.g. Python/Django, C++) and Elasticsearch.
- Analytical mind with problem-solving aptitude
- Ability to work independently
- Excellent organizational and leadership skills
- BSc/BA in Computer Science or a related degree
POSITION SUMMARY:
We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you’ll have the opportunity to work with cutting-edge technology and talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP), OpenStack, storage, backup, VMware, KVM, Xen, Hyper-V hypervisor server administration, networking, and automation in a data center operations environment at global enterprise scale, with Kubernetes and OpenShift container platforms.
EXPERIENCE
7- 9+ Years – Software Engineer III
PRIMARY RESPONSIBILITIES:
- Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual instance services from OpenStack, VMware VIO, public and private clouds, and DevOps environments.
- Work closely with customers to understand their requirements and deliver on timelines.
- Work closely with F5 architects and vendors to understand emerging technologies and the F5 product roadmap, and how they would benefit the Infra team and its users.
- Work closely with the team and complete deliverables on time.
- Consult with testers, application, and service owners to design scalable, supportable network infrastructure to meet usage requirements.
- Assume ownership of large/complex systems projects; mentor Lab Network Engineers in best practices for ongoing maintenance and scaling of large/complex systems.
- Drive automation efforts for the configuration and maintainability of the public/private cloud.
- Lead product selection for replacement or new technologies.
- Address user tickets in a timely manner for the covered services.
- Deploy, manage, and support production and pre-production environments for our core systems and services.
- Migrate and consolidate infrastructure.
- Design and implement major service and infrastructure components.
- Research, investigate, and define new areas of technology to enhance existing services or new service directions.
- Evaluate the performance of services and infrastructure; tune and re-evaluate the design and implementation of current source code and system configuration.
- Create and maintain scripts and tools to automate the configuration, usability, and troubleshooting of the supported applications and services.
- Take ownership of activities and new initiatives.
- Provide global infrastructure support from India to product development teams.
- Provide on-call support on a rotational basis across global time zones.
- Manage vendors for the latest hardware and software evaluations and keep systems up to date.
KNOWLEDGE, SKILLS AND ABILITIES:
- In-depth, multi-disciplinary knowledge of storage, compute, network, and DevOps technologies, and of the latest cutting-edge technologies.
- Multi-cloud: AWS, Azure, GCP, OpenStack, DevOps operations
- IaaS: Infrastructure as a Service, Metal as a Service, platform services
- Storage: Dell EMC, NetApp, Hitachi, Qumulo, and other storage technologies
- Hypervisors: VMware, Hyper-V, KVM, Xen, and AHV
- DevOps: Kubernetes, OpenShift, Docker, and other container and orchestration platforms
- Automation: scripting experience in Python/Shell/Golang, full-stack development, and application deployment
- Tools: Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration
- Data center operations: racking, stacking, cable matrix, solution design, and solutions architecture
- Networking skills: Cisco/Arista switches and routers; experience with cable matrix design and pathing (fiber/copper)
- Experience in SAN/NAS storage (EMC/Qumulo/NetApp & others)
- Experience with Red Hat Ceph storage.
- A working knowledge of Linux, Windows, and hypervisor operating systems and virtual machine technologies
- SME (subject matter expert) for all cutting-edge technologies
- Data center architect professional and Storage Expert level certified professional experience.
- A solid understanding of high-availability systems, redundant networking, and multipathing solutions
- Proven problem resolution related to network infrastructure; judgment, negotiating, and decision-making skills along with excellent written and oral communication skills.
- Working experience in object, block, and file storage technologies
- Experience in backup technologies and backup administration.
- Dell/HP/Cisco UCS server administration is an additional advantage.
- Ability to quickly learn and adopt new technologies.
- Very strong experience with and exposure to open-source platforms.
- Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.
- Working experience with bare-metal services and OS administration.
- Working experience with cloud networking such as AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.
- Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.)
- Working experience with systems engineering and Linux/Unix administration
- Working experience with database administration: PostgreSQL, MySQL, NoSQL
- Working experience with automation/configuration management using Puppet, Chef, or an equivalent
- Working experience with DevOps operations: Kubernetes, containers, Docker, and Git repositories
- Experience in build system processes, code inspection, and delivery methodologies.
- Knowledge of creating operational dashboards and execution lanes.
- Experience with and knowledge of DNS, DHCP, LDAP, AD, domain-controller services, and PXE services
- SRE experience: responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
- Vendor support: OEM upgrades, coordinating technical support, and troubleshooting experience.
- Experience in handling on-call support and the hierarchy process.
- Knowledge of scale-out and scale-in architectures.
- Working experience with ITSM/process management tools like ServiceNow, Jira, and Jira Align.
- Knowledge of Agile and Scrum principles
- Working experience with ServiceNow
- Knowledge sharing, transition experience, and self-learning behaviors.
We have openings for Fullstack / Backend / Frontend Developers who can write reliable, scalable, testable and maintainable code.
At Everest, we innovate at the intersection of design and engineering to produce outstanding products. The work we do is meaningful and challenging, which makes it interesting. Imagine each line of your code making the world a better place. We work five-day weeks, and overtime is a rarity. If clean architecture, TDD, DDD, DevOps, microservices, micro-frontends, and scalable systems resonate with you, please apply.
To see the quality of our code, you can checkout some of our open source projects: https://github.com/everest-engineering
If you want to know more about our culture:
https://github.com/everest-engineering/manifesto
Some videos that can help:
https://www.youtube.com/watch?v=A7y9RpqXAdA;
- Passion to own and create amazing products.
- Should be able to clearly understand the customer's problem.
- Should be a collaborative problem solver.
- Should be a team player.
- Should be open to learning from others and teaching others.
- Should be a good problem solver.
- Should be able to take feedback and improve continuously.
- Should commit to inclusion, equity & diversity.
- Should maintain integrity at work
- Familiarity with Agile methodologies and clean code.
- Design and/or contribute to client-side and server-side architecture.
- Well versed with fundamentals of REST (see the sketch after this list).
- Build the front-end of applications through appealing visual design.
- Knowledge of one or more front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery, TypeScript) and JavaScript frameworks (e.g. Angular, React, Redux, Vue.js)
- Knowledge of one or more back-end languages (e.g. C#, Java, Python, Go, Node.js) and frameworks like Spring Boot and .NET Core
- Well versed with fundamentals of database design.
- Familiarity with databases: RDBMS like MySQL and Postgres, and NoSQL like MongoDB and DynamoDB.
- Well versed with one or more cloud platforms like AWS, Azure, GCP.
- Familiar with Infrastructure as Code (CloudFormation, Terraform) and deployment tools like Docker and Kubernetes.
- Familiarity with CI/CD tools like Jenkins, CircleCI, GitHub Actions.
- Unit testing tools like JUnit, Mockito, Chai, Mocha, Jest
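As a sketch of the REST fundamentals mentioned in the list above, a minimal Flask endpoint in Python (one of the back-end languages listed); the route and payload are invented for the example.

# A resource-oriented URL returning a JSON representation: the core of REST.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/users/<int:user_id>")
def get_user(user_id: int):
    return jsonify({"id": user_id, "name": "example"})

if __name__ == "__main__":
    app.run()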
deals in e-commerce, mobile applications, cloud computing
Job Description:
We are seeking a highly skilled and motivated malware analyst to join our cybersecurity team. The ideal candidate will have a strong background in static and dynamic malware analysis, as well as experience analyzing PE and non-PE files.
In this role, you will be responsible for identifying, analyzing, and mitigating malware threats. This will involve using a variety of tools and techniques to reverse engineer malware samples, analyze their behavior, and develop countermeasures to prevent future infections. You will also work closely with our incident response team to investigate and contain malware outbreaks.
Key Responsibilities:
- Analyze malware samples using static and dynamic analysis techniques
- Reverse engineer malware to understand its behavior and identify potential vulnerabilities
- Analyze PE and non-PE files to identify malicious code and behaviors (see the sketch after this list)
- Collaborate with the incident response team to investigate and contain malware outbreaks
- Develop and maintain documentation on malware threats and countermeasures
- Stay up-to-date with the latest malware trends and techniques
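For a flavor of static PE-file triage as described in this list, a tiny sketch using the third-party pefile library; the sample path is a placeholder, and real analysis goes far beyond this.

# First-pass static look at a PE file: entry point plus per-section entropy.
import pefile

pe = pefile.PE("sample.exe")
print(hex(pe.OPTIONAL_HEADER.AddressOfEntryPoint))
for section in pe.sections:
    # High-entropy sections often indicate packing or encryption.
    print(section.Name.rstrip(b"\x00"), section.get_entropy())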
Requirements:
- Bachelor's degree in computer science or a related field
- 3+ years of experience in malware analysis
- Proficiency in static and dynamic malware analysis techniques
- Experience analyzing PE and non-PE files
- Strong understanding of computer and network security principles
- Proficiency in programming languages such as C, C++, and Python
- Strong problem-solving and communication skills
- Ability to work independently and as part of a team
Preferred Qualifications:
- Master's degree in computer science or a related field
- Experience with incident response and digital forensics
- Knowledge of assembly language and reverse engineering tools such as IDA Pro
- Experience with threat intelligence analysis and reporting
SDE II BE
GradRight is an ed-fin-tech startup focused on global higher education. Using data science, technology and strategic partnerships across the industry, we are enabling students to find the “Right University” at the “Right Cost”. We are on a mission to aid a million students to find their best-fit universities and financial offerings by 2025.
Our flagship product, FundRight, is the world’s first student loan bidding platform. In a short span of 10 months, we have facilitated disbursements of more than $50 million in loans this year, and we are poised to scale up rapidly.
We are launching our second product - SelectRight as an innovative approach to college selection and student recruitment for students and universities, respectively. The product rests on the three pillars of data science, transparency and ethics and hopes to create a lot of value for students and universities.
Brief:
We are pursuing a complex set of problems that involve building for an international audience and for an industry that has largely been service-centric. As an SDE at GradRight, you’ll bring an unmatched customer-centricity to your work, with a focus on building for the long term and large scale.
You’ll contribute to frameworks that enable flexible/scalable customer journeys and tying them with institutional knowledge to help us build the best experiences for students and our partners.
Responsibilities:
- Contribute to design discussions and decisions around building scalable and modular architecture
- Build clean, modular and scalable backend services
- Participate in sprint ceremonies and actively contribute to scaling the engineering organization from a process perspective
- Stay on top of the software engineering ecosystem and propose new technologies/methodologies that can be adopted
- Contribute to engineering hiring by conducting interviews
- Champion infrastructure as code mindset and encourage automation
- Manage and mentor a small team of junior engineers
Requirements:
- At least 4 years’ experience as an SDE, building large-scale services
- Extensive experience in at least one programming language and ability to write maintainable, scalable and unit-testable code
- Strong understanding of software design principles and patterns
- Excellent command over data structures and algorithms
- Passion for solving complex problems
- Good understanding of various database technologies with strong opinions around their use cases
- Experience with performance monitoring and scaling backend services
- Experience with microservices and distributed systems in general
- Experience with team management
- Excellent written and verbal communication skills
Good to have:
- Experience working with node.js, MongoDB and Google Cloud
- Experience in CI/CD and cloud infrastructure management
- Worked on products that addressed an international audience
- Worked on products that scaled to millions of users
Location: Hyderabad
Additional resources:
- Our team - https://gradright.com/team.html
- Customer testimonials - https://gradright.com/testimonials.html
Established in 2014, they are a global technology services company that delivers Digital Transformation, Cloud, Data and Insights, Cybersecurity, and Strategic Staffing solutions that improve customers’ businesses and bottom line. They are a team of over 400 people, headquartered in the US and one of the operating countries is India.
Location: Hyderabad
Budget: 30-45LPA
Position: Python Lead
Experience: 9+ years
Mandatory Skills: Python, Python Script, HTML, CSS, JavaScript, Django, Microservices, React or Angular JS
Hands-on experience working in architectures involving one or more of the following concepts and their implementations:
XML/JSON message processing, REST APIs, GraphQL, object-relational mapping, asynchronous web services, and distributed message queues.
Interview Process:
Technical Round
Customer Round
Prelude
We are BeyondScale, on a mission to build a mobile learning app to help organizations create internal courses for their workforce easily. eLearning is booming and we aim to tap into the under-served non-IT L&D market and make a difference in the livelihoods of millions of people.
Job Description:
- 2+ years of experience coding with Python.
- Design, build, and maintain efficient, reusable, and reliable code.
- Eager and proactive to learn new technical skills.
- Hands-on experience developing web APIs and writing database queries in PostgreSQL (MongoDB, MySQL, and DynamoDB are a plus).
- Good understanding of OOP, multiprocessing, and threading.
- Proficient in testing and debugging programs.
- Well-versed with Git and modern development workflow practices
Expected to know CI/CD, Git etc.
Develop and deploy to Test environments (experience with Docker; basic SSH knowledge).
OSS Engineer: a very good understanding of REST-based APIs; write scripts against various monitoring tools to pull the relevant data, and make the collected data available to others through an API.
Overall experience 5-8 years.
Location - Hyderabad
At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation.
Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.
- Design, code, enhance, and fix bugs in DNS and related areas.
- Bring new ideas to improve day-to-day challenges in the design/functionality.
- Provide technical direction to the ongoing and future projects in the team.
- Keep the product vulnerability-free by uplifting/fixing the open issues.
- Build tools and infrastructure to improve these F5 components and features.
- Set an example of software design and development innovation and excellence.
- Research, investigate, and define new areas of technology to enhance existing or new product directions.
- Evaluate product performance; fine-tune and refactor the design as required to scale up.
- Must have worked in security and related areas.
- Document software designs via functional specifications and other design documents.
- Conduct internal and external presentations; mentor team members.
- May participate in the hiring and onboarding process.
- Collaborate with team members and technical leads.
- Responsible for upholding F5’s Business Code of Ethics and for promptly reporting violations of the Code or other company policies.
Knowledge, Skills and Abilities:
Essential
- Deep understanding of data structures and algorithms.
- Expert in C and C++, with hands-on experience.
- Fair understanding of the scripting languages Python and JavaScript.
- Expertise in Linux user-level programming and exposure to the Linux networking stack.
- Good understanding of TCP/IP concepts.
- Proven experience with security standards.
- Excellent analytical and problem-solving skills.
- Good understanding of network security and DNS modules.
- Excellent understanding of networking technologies and OS internals.
- Prior experience leading and delivering projects/programs involving multiple teams.
- Prior experience leading and mentoring senior engineers to deliver critical projects.
Nice-to-have
- Prior experience developing DNS and related modules is a plus.
- Good understanding of network protocols like TCP, UDP, HTTP, SSL, DNS, FTP, etc.
- Experience with CI/CD (git, pipelines, etc.).
Qualifications
- Requires a minimum of 15+ years of related experience with a Bachelor of Engineering in ECE/Computers, or similar years of experience with an ME/MTech in ECE/Computers.
- Excellent organizational agility and interpersonal skills throughout the organization.
- Ability to work flexible hours for better collaboration with international teams.
F5 Inc. is an equal opportunity employer and strongly supports diversity in the workplace. The Job Description is intended to be a general representation of the responsibilities and requirements of the job. However, the description may not be all-inclusive, and responsibilities and requirements are subject to change.
at Altimetrik
Big Data Engineer: 5+ yrs.
Immediate Joiner
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight (see the sketch below)
- Experience in developing Lambda functions with AWS Lambda
- Expertise with Spark/PySpark: the candidate should be hands-on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala
- Snowflake experience will be a plus
- Hadoop and Hive can be treated as good-to-have: a working understanding is enough rather than a hard requirement
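To make the Glue -> Athena -> QuickSight pipeline above concrete, a hedged boto3 sketch that starts an Athena query; the database, table, and output bucket are placeholders.

# Kick off an Athena query from Python; Athena writes results to S3.
import boto3

athena = boto3.client("athena")
resp = athena.start_query_execution(
    QueryString="SELECT count(*) FROM example_db.events",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(resp["QueryExecutionId"])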
About NxtWave
NxtWave is revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals irrespective of their educational background with our CCBP 4.0 programs.
By offering vernacular content and interactive learning, NxtWave is breaking the entry barrier for learning tech skills. Learning in one's mother tongue helps learners achieve higher comprehension, deeper attention, longer retention, and greater outcomes.
NxtWave was founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur).
NxtWave now has paid subscribers from 250+ districts across India who have spent 200 million minutes on its learning platform so far. In the last 5 months, CCBP 4.0 learners have been hired by 1000 companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more.
Know more about NxtWave at ( https://www.ccbp.in/ )
Job Summary
As a Technical Content Lead at NxtWave, you will lead and collaborate with the team to develop, review, and manage technical content in a variety of formats as needed (documents, videos, tutorials, etc.). You will contribute to the continuous improvement of our technical content process by developing, prioritizing, and maintaining a content strategy and plan.
Responsibilities
- Lead and collaborate with the team to develop high-quality technical content, setting goals and preparing phased rollout plans to thousands of CCBP users
- Collaborate with different stakeholders to understand user needs; prioritize content milestones based on user feedback and the product roadmap
- Responsible for creating, developing, reviewing, and managing technical content in a variety of formats as needed (documents, videos, tutorials, etc.)
- Ensure all content is up to standard by providing and receiving constructive feedback and making sure standards and guidelines are followed
- Contribute to the continuous improvement of our technical content process by developing, prioritizing, and maintaining a content strategy and plan
- Lead technical discussions within the team and be the team's go-to person for guidance and troubleshooting
- Make effective use of resources during the various stages of work, ensuring that business objectives are met and deliverables are achieved on time and to quality
- Align to Agile principles and adhere to Sprint methodology
- Coordinate with other departments to ensure the right message is communicated in formats that match the audience's technical expertise
- Gather and analyze feedback to identify gaps and enhance the content requirements based on priority
Qualifications and Skills
- 2+ years of experience developing industry-leading content in front-end (HTML/CSS/JS and React) or backend technologies (Python, SQL, NodeJS)
- A strong, self-motivated individual who can independently drive decisions, with excellent leadership qualities
- Strong command of the English language: an eye for detail, excellent grammar, and proofreading skills
- Ability to work in a fast-paced environment with a team and deliver high-quality work on tight timelines
- Excellent attention to detail and the ability to prioritize and work on multiple requirements simultaneously
- Proven ability to set goals and prepare phased rollout plans to thousands of CCBP users by discussing with multiple stakeholders and arriving at an optimal approach for the existing set of users
- Excellent written and verbal communication skills with a proven ability to communicate complex topics in an understandable way
Qualities we'd love to find in you
- The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality content
- Strong collaboration abilities and a flexible & friendly approach to working with teams
- Strong determination with a constant eye on solutions
- Creative ideas with a problem-solving mindset
- Openness to receiving objective criticism and improving upon it
- Eagerness to learn and zeal to grow
Overview
Location : Hyderabad
Experience : Min 2 years
Job Type : Full time
Working Days : 5 - Day Week
Compensation : 9 - 12 LPA
System Engineer (ECU)
Location: Hyderabad
Fulltime
Your tasks as Specialist System Engineer ECU(m/f):
- Analysis of the requirements for the electronics control of occupant safety systems in alignment to the customer specifications
- Definition of the System Requirements for the embedded system
- Ensuring quality requirements according to ASPICE and ASIL (ISO 26262)
- Definition of the design and architecture of the ECU system, taking into account the latest technologies and continuous improvement ideas, using SysML
- Coordination and functional leadership of the project team from hardware and software development
Your profile as Specialist Systems Engineer ECU(m/f):
- Successfully completed engineering studies in electronics, embedded system, or software
- Several years of professional experience in the development of control units within the automotive industry
- Experience in product development according to ASPICE and ISO 26262
- Assertiveness, target-oriented and structured working methods, ability to work in a team
- Willingness to work in an intercultural environment
at Persistent Systems
Responsibilities
- Develop process workflows for data preparation, modeling, and mining
- Manage configurations to build reliable datasets for analysis
- Troubleshoot services, system bottlenecks, and application integration
- Design, integrate, and document technical components and dependencies of the big data platform
- Ensure best practices that can be adopted in the Big Data stack and shared across teams
- Design and develop data pipelines on AWS Cloud
- Develop data pipelines using PySpark, AWS, and Python
- Develop PySpark streaming applications (see the sketch below)
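As a minimal sketch of the PySpark streaming work listed above, a Structured Streaming word-count over a socket source; the host/port source is purely for illustration.

# Read a text stream, count occurrences, and print results to the console.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

counts = lines.groupBy("value").count()

query = (counts.writeStream.outputMode("complete")
         .format("console").start())
query.awaitTermination()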
Eligibility
- Hands-on experience in Spark, Python, and Cloud
- Highly analytical and data-oriented
- Good to have - Databricks
- Experienced in writing complex SQL SELECT queries (window functions & CTEs) with advanced SQL experience (see the sketch below)
- Should be an individual contributor for the initial few months; based on project movement, a team will be aligned
- Strong in querying logic and data interpretation
- Solid communication and articulation skills
- Able to handle stakeholders independently, with minimal intervention from the reporting manager
- Develop strategies to solve problems in logical yet creative ways
- Create custom reports and presentations accompanied by strong data visualization and storytelling
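To illustrate the window-function and CTE experience asked for above, a small sketch run through Python's built-in sqlite3 module (SQLite 3.25+ supports window functions); the schema and data are made up.

# A CTE plus a window function, executed against an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('north', 10), ('north', 30), ('south', 20);
""")

query = """
WITH regional AS (
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
)
SELECT region, amount, region_total FROM regional;
"""
for row in conn.execute(query):
    print(row)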
- Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
- About 5 years of professional experience supporting AWS cloud environments
- AWS Certified Solutions Architect (Associate or Professional)
- Experience serving as a lead (shift management, reporting) will be a plus
- AWS Certified Solutions Architect Professional (must have)
- Minimum 4 years' and maximum 8 years' experience
- 100% work from office in Hyderabad
- Very fluent in English
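For the SQL requirement above, here is a minimal sketch of a CTE combined with a window function, written against Python's built-in sqlite3 module so it runs anywhere; the table and data are invented for illustration:

# Minimal sketch of a CTE plus a window function, using Python's built-in
# sqlite3 (window functions need SQLite 3.25+, bundled with modern Python).
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 120.0), ('alice', 80.0), ('bob', 200.0), ('bob', 50.0);
""")

query = """
WITH customer_totals AS (            -- CTE: pre-aggregate per customer
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank   -- window function
FROM customer_totals;
"""

for row in conn.execute(query):
    print(row)   # e.g. ('bob', 250.0, 1)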
- Excellent knowledge of Core Java (J2SE) and J2EE technologies.
- Hands-on experience with RESTful services and API design is a must.
- Knowledge of microservices architecture is a must.
- Knowledge of design patterns is a must.
- Strong knowledge of exception handling and logging mechanisms is a must.
- Agile Scrum participation experience. Work experience with several agile teams on an application built with microservices and event-based architectures, deployed on hybrid (on-prem/cloud) environments.
- Good knowledge of the Spring framework (MVC, Cloud, Data, Security, etc.) and an ORM framework like JPA/Hibernate.
- Experience in managing the source code base through a version control tool like SVN, GitHub, Bitbucket, etc.
- Experience in using and configuring continuous integration tools such as Jenkins, Travis, GitLab, etc.
- Experience in the design and development of SaaS/PaaS-based architectures and tenancy models.
- Experience in SaaS/PaaS-based application development used by a high volume of subscribers/customers.
- Awareness and understanding of data security and privacy.
- Experience in performing Java code reviews using review tools like SonarQube, etc.
- Good understanding of the end-to-end software development lifecycle. Ability to read and understand requirements and design documents.
- Good analytical skills; should be self-driven.
- Good communication and interpersonal skills.
- Open to learning new technologies and domains.
- A good team player, ready to take up new challenges. Active communication and coordination with clients and internal stakeholders.
Requirements: Skills and Qualifications
6-8 years of experience in developing Java/J2EE-based enterprise web applications
Languages: Java, J2EE, and Python
Databases: MySQL, Oracle, SQL Server, PostgreSQL, Redshift, MongoDB
DB Script: SQL and PL/SQL
Frameworks: Spring, Spring Boot, Jersey, Hibernate, and JPA
OS: Windows, Linux/Unix
Cloud Services: AWS and Azure
Version Control/DevOps tools: Git, Bitbucket, and Jenkins
Message Brokers: RabbitMQ and Kafka
Deployment Servers: Tomcat, Docker, and Kubernetes
Build Tools: Gradle/Maven
at Altimetrik
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing Lambda functions with AWS Lambda (see the sketch below)
- Expertise with Spark/PySpark: candidate should be hands-on with PySpark code and able to do transformations with Spark
- Should be able to code in Python and Scala
- Snowflake experience will be a plus
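As a loose illustration of the Lambda work mentioned above, here is a minimal sketch of an S3-triggered handler; the bucket, prefixes, and event shape are hypothetical assumptions, not details from the posting:

# Minimal AWS Lambda handler sketch: triggered by an S3 put event, it logs
# the object key and copies the file under a "processed/" prefix.
# Bucket names and prefixes are hypothetical.
import logging
import urllib.parse

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
s3 = boto3.client("s3")

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        logger.info("received s3://%s/%s", bucket, key)
        s3.copy_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"status": "ok"}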
Good experience with and understanding of CRM & ERP processes and design
- 3+ years of hands-on experience in ERPNext Platform and Frappe Framework, Python, JavaScript and MySQL/MariaDB
- Strong knowledge of configuration and customisation of the ERPNext platform
Altimetrik
Big data Developer
Experience: 3 to 7 years
Job Location: Hyderabad
Notice: Immediate / within 30 days
1. Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> Quick sight
2. Experience in developing lambda functions with AWS Lambda
3. Expertise with Spark/PySpark: candidate should be hands-on with PySpark code and able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
Hadoop and Hive can be kept as good-to-have; an understanding of them is enough rather than a hard requirement.
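For context on the Glue-based pipelines these listings keep mentioning, a minimal Glue job skeleton might look like the following sketch; it assumes a Glue PySpark job, and the catalog database, table, and output path are hypothetical:

# Minimal AWS Glue PySpark job skeleton: read a table from the Glue Data
# Catalog, drop rows with null keys, and write Parquet to S3 for Athena.
# Database, table, and output path are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)
df = dyf.toDF().dropna(subset=["event_id"])   # simple transformation step

df.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")
job.commit()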
What are the Key Responsibilities:
- Design NLP applications
- Select appropriate annotated datasets for Supervised Learning methods
- Use effective text representations to transform natural language into useful features
- Find and implement the right algorithms and tools for NLP tasks
- Develop NLP systems according to requirements
- Train the developed model and run evaluation experiments
- Perform statistical analysis of results and refine models
- Extend ML libraries and frameworks to apply in NLP tasks
- Remain updated in the rapidly changing field of machine learning
What are we looking for:
- Proven experience as an NLP Engineer or similar role
- Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
- Ability to effectively design software architecture
- Deep understanding of text representation techniques (such as n-grams, bag-of-words, sentiment analysis, etc.), statistics, and classification algorithms
- Knowledge of Python, Java, and R
- Ability to write robust and testable code
- Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Strong communication skills
- An analytical mind with problem-solving abilities
- Degree in Computer Science, Mathematics, Computational Linguistics, or similar field
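To make the text-representation requirement concrete, here is a minimal sketch using scikit-learn's bag-of-words-style features; the tiny corpus and labels are invented purely for illustration:

# Minimal bag-of-words text classification sketch with scikit-learn.
# The corpus and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly",
    "terrible experience, would not recommend",
    "absolutely love it",
    "broke after two days",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Uni- and bi-gram TF-IDF features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["really great, I love it"]))  # expected: [1]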
What are the Key Responsibilities:
- Responsibilities include writing and testing code, debugging programs, and integrating applications with third-party web services.
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Improve functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Create customized applications for smaller tasks to enhance website capability based on business needs
- Ensure web pages are functional across different browser types; conduct tests to verify user functionality
- Verify compliance with accessibility standards
- Assist in resolving moderately complex production support problems
What are we looking for:
- 3+ Years of work experience as a Python Developer
- Expertise in at least one popular Python framework, such as Django
- Knowledge of NoSQL databases (Elasticsearch, MongoDB)
- Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
- Familiarity with Apache Kafka will give you an edge over others
- Good understanding of the operating system and networking concepts
- Good analytical and troubleshooting skills
- Graduation/Post Graduation in Computer Science / IT / Software Engineering
- Decent verbal and written communication skills to communicate with customers, support personnel, and management
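As a small illustration of the Django back-end work described above, here is a sketch of a JSON endpoint; the view and route names are hypothetical, and a real project would wire this module into its settings and root URLconf:

# Minimal Django sketch: a JSON health-check view and its URL route.
# Names are hypothetical; this module assumes a configured Django project.
from django.http import JsonResponse
from django.urls import path

def health(request):
    # A trivial back-end endpoint returning service status as JSON.
    return JsonResponse({"status": "ok", "service": "example-api"})

urlpatterns = [
    path("health/", health, name="health"),
]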
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
- Hands-on experience with Python, PySpark, and SQL
- Hands-on experience building end-to-end data pipelines (a sketch follows this list)
- Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage
- Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, and Spark SQL
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
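A minimal end-to-end batch pipeline of the kind referenced above might look like this sketch; PySpark is assumed since the listing names it, and the file paths and column names are hypothetical:

# Minimal end-to-end batch pipeline sketch in PySpark:
# ingest CSV -> clean/transform -> write partitioned Parquet.
# File paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-pipeline-demo").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/data/raw/transactions.csv")       # hypothetical input
)

cleaned = (
    raw.dropna(subset=["txn_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("txn_date", F.to_date("txn_date"))
)

daily = cleaned.groupBy("txn_date").agg(F.sum("amount").alias("total_amount"))

(
    daily.write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("/data/curated/daily_totals/")  # hypothetical output
)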
Consulting & implementation services in the areas of the Oil & Gas, Mining, and Manufacturing industries
- Data Engineer
Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON
We have openings for Fullstack / Backend / Frontend Developers who can write reliable, scalable, testable and maintainable code.
At Everest, we innovate at the intersection of design and engineering to produce outstanding products. The work we do is meaningful and challenging - which makes it interesting. Imagine each line of your code making the world a better place. We work five-day weeks, and overtime is a rarity. If clean architecture, TDD, DDD, DevOps, microservices, micro-frontends, and scalable systems resonate with you, please apply.
To see the quality of our code, you can check out some of our open-source projects: https://github.com/everest-engineering
If you want to know more about our culture:
https://github.com/everest-engineering/manifesto
Some videos that can help:
https://www.youtube.com/watch?v=A7y9RpqXAdA
- Passion to own and create amazing product.
- Should be able to clearly understand the customer's problem.
- Should be a collaborative problem solver.
- Should be a team player.
- Should be open to learn from others and teach others.
- Should be a good problem solver.
- Should be able to take feedback and improve continuously.
- Should commit to inclusion, equity & diversity.
- Should maintain integrity at work
- Familiarity with Agile methodologies and clean code.
- Design and/or contribute to client-side and server-side architecture.
- Well versed with the fundamentals of REST.
- Build the front-end of applications through appealing visual design.
- Knowledge of one or more front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery, TypeScript) and JavaScript frameworks (e.g. Angular, React, Redux, Vue.js).
- Knowledge of one or more back-end languages (e.g. C#, Java, Python, Go, Node.js) and frameworks like Spring Boot and .NET Core.
- Well versed with the fundamentals of database design.
- Familiarity with databases - RDBMS like MySQL and Postgres, and NoSQL like MongoDB and DynamoDB.
- Well versed with one or more cloud platforms like AWS, Azure, and GCP.
- Familiar with Infrastructure as Code (CloudFormation, Terraform) and deployment tools like Docker and Kubernetes.
- Familiarity with CI/CD tools like Jenkins, CircleCI, and GitHub Actions.
- Unit testing tools like JUnit, Mockito, Chai, Mocha, and Jest (a mocking sketch follows this list).
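The testing tools named above belong to the JVM and JavaScript ecosystems; as a language-neutral illustration of the same mock-based unit-testing pattern, here is a sketch using Python's built-in unittest.mock (the service and function names are invented):

# Minimal mock-based unit test sketch using Python's standard library.
# PaymentClient-style "client" and charge_order are invented names;
# the pattern mirrors what Mockito/Jest mocks do in other ecosystems.
import unittest
from unittest.mock import Mock

def charge_order(client, order_id, amount):
    """Charge an order via the given client and return a receipt dict."""
    txn_id = client.charge(order_id, amount)
    return {"order": order_id, "txn": txn_id}

class ChargeOrderTest(unittest.TestCase):
    def test_charge_order_uses_client(self):
        client = Mock()
        client.charge.return_value = "txn-123"   # stub the dependency

        receipt = charge_order(client, "ord-1", 49.99)

        client.charge.assert_called_once_with("ord-1", 49.99)
        self.assertEqual(receipt, {"order": "ord-1", "txn": "txn-123"})

if __name__ == "__main__":
    unittest.main()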