

We are looking for a technically driven Full-Stack Engineer for one of our premium clients.
COMPANY DESCRIPTION:
Qualifications
• Bachelor's degree in computer science or related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected (see the sketch after this list)
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to communicate complex solutions clearly
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
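
As a reference for the Spark and SQL requirement above, here is a minimal PySpark sketch, assuming a hypothetical events.json input and illustrative column names (user_id, event_time); it is a sketch of the general pattern, not a prescribed implementation:

```python
# Minimal sketch (assumptions: PySpark installed; events.json, user_id, and event_time
# are hypothetical names used only for illustration).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Semi-structured input: Spark infers a schema from the JSON records.
events = spark.read.json("events.json")
events.createOrReplaceTempView("events")

# Plain SQL over the distributed DataFrame; the aggregation executes across the cluster.
daily_counts = spark.sql("""
    SELECT user_id, to_date(event_time) AS day, COUNT(*) AS events
    FROM events
    GROUP BY user_id, to_date(event_time)
""")

daily_counts.show()
spark.stop()
```

The same pattern extends to structured sources (Parquet, JDBC) and to unstructured text, which is where the cross-format requirement above comes in.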



AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- AI/ML Engineer / Intern - Python, FastAPI, Flask/Django, PyTorch, TensorFlow, Scikit-learn, GenAI Tools
Apply Now: https://links.acciojob.com/44MJQSB
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and a 5-6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company-side process: 2 rounds of tech interviews (Virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/44MJQSB
Important: Please bring your laptop & earphones for the test.
We are looking for a solid firmware testing professional who can intuitively develop test plans and test tools as needed.
This individual will be accountable for driving day-to-day execution and must be able to understand the existing system and its implementation, then architect the test strategy and tools to ensure solid deliverables for external customers.
Responsibilities:
Design and drive the integration and test effort.
Work with system, software, and hardware engineers to understand system requirements and limitations, and determine what needs to be tested.
Prioritize test effort and development as deliverable plans may be adjusted.
Proactively learn our system design and implementation to drive the test effort along with each milestone.
Required Skills:
Proven experience in test-in-loop development (SIL, HIL).
Proficiency with VISA instrument drivers.
Hands-on experience with lab equipment, especially Mixed Domain Oscilloscopes (MDO).
Strong expertise in embedded systems, including hardware and software components.
Proficiency in Python for scripting and automation of test processes (see the sketch after this list).
Familiarity with Jenkins or other CI/CD frameworks (Azure preferred).
Experience with source control systems, particularly Git.
General experience in writing and debugging tests.
Background in networking is preferred.
Exposure to DevOps practices and SQL is an advantage.
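
As a rough illustration of the Python, VISA, and lab-equipment points above, here is a minimal sketch assuming PyVISA with a working VISA backend; the instrument address is a placeholder, not a real device:

```python
# Minimal sketch (assumptions: PyVISA installed with a VISA backend such as NI-VISA
# or pyvisa-py; the resource string below is a hypothetical MDO address).
import pyvisa

RESOURCE = "TCPIP0::192.0.2.10::INSTR"  # placeholder oscilloscope address

def check_scope_identity() -> str:
    rm = pyvisa.ResourceManager()
    scope = rm.open_resource(RESOURCE)
    scope.timeout = 5000  # milliseconds
    idn = scope.query("*IDN?")  # standard SCPI identification query
    scope.close()
    return idn

if __name__ == "__main__":
    print(check_scope_identity())
```

In a HIL setup, a helper like this would typically be wrapped in test fixtures and driven from the CI system (Jenkins or Azure pipelines) mentioned above.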
Qualifications:
B.Tech. (Electronics, E&E, CSE), M.Tech (Electronics), PhD (Electronics)
Other Requirements:
H1B Visa ready
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture.
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with Git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes.)
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred; see the sketch below.)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps administrator + Developer certification or equivalent knowledge
Good to have:
- Experience working with ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog.)
- Experience working in an XaaC environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with JIRA/Jira help desk.
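
To make the "Python + boto3" preference above concrete, here is a minimal sketch; the cluster name is a placeholder and the function is a hypothetical helper, not part of Porter's actual tooling:

```python
# Minimal sketch (assumptions: boto3 installed and AWS credentials configured;
# "production" below is a placeholder cluster name).
import boto3

def list_ecs_services(cluster_name: str) -> list[str]:
    """Return the service ARNs registered in one ECS cluster."""
    ecs = boto3.client("ecs")
    arns: list[str] = []
    # list_services is paginated, so walk every page rather than only the first results.
    paginator = ecs.get_paginator("list_services")
    for page in paginator.paginate(cluster=cluster_name):
        arns.extend(page["serviceArns"])
    return arns

if __name__ == "__main__":
    for arn in list_ecs_services("production"):
        print(arn)
```

Real automation on this team would go through its own tooling and Terraform/CloudFormation; this only illustrates the SDK usage pattern.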
Your key responsibilities
- Create and maintain optimal data pipeline architecture, with experience building batch and real-time ETL data pipelines, and build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (see the sketch after this list).
- The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
- Responsible for development, support, maintenance, and implementation of a complex project module
- Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for complete reporting solutions.
- Prepare the high-level design (HLD) describing the application architecture.
- Prepare the low-level design (LLD) covering job design, job descriptions, and detailed information about each job.
- Prepare and execute unit test cases.
- Provide technical guidance and mentoring to application development teams throughout all the phases of the software development life cycle
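
As a rough illustration of the batch ETL responsibility above, here is a minimal Python sketch; the file name, connection string, table, and column names are placeholders, and production pipelines for this role would more likely be orchestrated through Azure Data Factory as noted below:

```python
# Minimal batch-ETL sketch (assumptions: pandas, SQLAlchemy, and a SQL Server ODBC
# driver installed; the source file, connection string, and column names are placeholders).
import pandas as pd
from sqlalchemy import create_engine

SOURCE = "daily_orders.csv"                    # hypothetical extract from an upstream system
TARGET = "mssql+pyodbc:///?odbc_connect=..."   # placeholder warehouse connection string

def run_batch_load() -> None:
    # Extract
    raw = pd.read_csv(SOURCE, parse_dates=["order_date"])
    # Transform: drop duplicate orders and stamp the load time
    cleaned = raw.drop_duplicates(subset=["order_id"]).assign(
        loaded_at=pd.Timestamp.now(tz="UTC")
    )
    # Load into a staging table for downstream warehouse processing
    engine = create_engine(TARGET)
    cleaned.to_sql("stg_daily_orders", engine, if_exists="append", index=False)

if __name__ == "__main__":
    run_batch_load()
```

In practice, scheduling, retries, and monitoring for a pipeline like this would sit in Azure Data Factory rather than a standalone script.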
Skills and attributes for success
- Strong experience in SQL: proficient in writing and debugging complex, performant SQL over large data volumes (see the sketch after this list).
- Strong experience with Microsoft Azure data services; experienced with Azure Data Factory.
- Strong in data warehousing concepts, with experience in large-scale data warehousing architecture and data modelling.
- Sufficient experience with PowerShell scripting.
- Able to guide the team through the development, testing and implementation stages and review the completed work effectively
- Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
- Primary owner of delivery and timelines; review code written by other engineers.
- Maintain the highest standards of development practice, including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, and writing clean, modular, self-sustaining code with repeatable quality and predictability.
- Must have an understanding of business intelligence development in the IT industry.
- Outstanding written and verbal communication skills
- Should be adept in SDLC process - requirement analysis, time estimation, design, development, testing and maintenance
- Hands-on experience in installing, configuring, operating, and monitoring CI/CD pipeline tools
- Should be able to orchestrate and automate pipelines.
- Good to have: knowledge of distributed systems such as Hadoop, Hive, and Spark.
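
For the performant-SQL point above, here is a minimal sketch of running a parameterized, database-side aggregation from Python via pyodbc; the server, database, table, and column names are placeholders:

```python
# Minimal sketch (assumptions: pyodbc and ODBC Driver 18 for SQL Server installed;
# the server, database, table, and column names below are placeholders).
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:example-server.database.windows.net,1433;"  # placeholder Azure SQL server
    "Database=example_dw;"                                   # placeholder database
    "Authentication=ActiveDirectoryInteractive;"
)

# Push filtering and aggregation down to the database instead of pulling raw rows.
SQL = """
SELECT order_date, SUM(amount) AS daily_total
FROM sales.orders
WHERE order_date >= ?
GROUP BY order_date
ORDER BY order_date;
"""

def daily_totals(since: str):
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        cursor.execute(SQL, since)  # parameterized query: safer and plan-cache friendly
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in daily_totals("2024-01-01"):
        print(row.order_date, row.daily_total)
```

Keeping the aggregation in the database and parameterizing the filter is what keeps this approach performant at large data volumes.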
To qualify for the role, you must have
- Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
- More than 6 years of experience in ETL development projects
- Proven experience in delivering effective technical ETL strategies
- Microsoft Azure project experience
- Technologies: ETL - ADF, SQL, Azure components (must-have), Python (nice to have)
Ideally, you’ll also have


We at Convrse are revolutionizing the way 'The Metaverse' is going to be built. We aim to become the world’s largest library for sourcing interoperable 3D spaces for building all kinds of metaverse experiences.
We are searching for an innovative technical lead to join our company. As the technical lead, you will oversee the company’s technical team and all projects they undertake, analyze briefs, write progress reports, identify risks, and develop work schedules. You should be able to work with your team and inspire them to reach their goals.
To be successful as a technical lead, you should always be expanding your industry knowledge and be able to quickly identify problems. Outstanding technical leads are accountable, trustworthy, and able to build lasting relationships with their teams.
Technical Lead Responsibilities:
- Determining project requirements and developing work schedules for the team.
- Delegating tasks and achieving daily, weekly, and monthly goals.
- Liaising with team members, management, and clients to ensure projects are completed to standard.
- Identifying risks and forming contingency plans as soon as possible.
- Analyzing existing operations and scheduling training sessions and meetings to discuss improvements.
- Keeping up-to-date with industry trends and developments.
- Updating work schedules and performing troubleshooting as required.
- Motivating staff and creating a space where they can ask questions and voice their concerns.
- Being transparent with the team about challenges, failures, and successes.
- Writing progress reports and delivering presentations to the relevant stakeholders.
Technical Lead Requirements:
- Bachelor’s/Master’s degree in computer science, engineering, or a related field.
- 3-4 years of experience in a similar role would be advantageous.
- In-depth experience working with React.js and Go.
- Excellent technical, diagnostic, and troubleshooting skills.
- Experience with blockchain technology is preferable.
- Strong leadership and organizational abilities.



