About the role
Checking quality is one of the most important tasks at Anakin. Our clients price their products based on our data, and minor errors on our end can cost them millions of dollars. You would work with multiple tools and with people across various departments to ensure the accuracy of the data being crawled. You would set up manual and automated processes, and keep them running, to ensure the highest possible data quality.
You are the engineer other engineers can count on. You embrace every problem with enthusiasm, remove hurdles, and are a self-starter and a team player. You have the hunger to venture into unknown areas and make the system work.
Your responsibilities would be to:
- Understand customer web scraping and data requirements; translate these into test approaches that include exploratory manual/visual testing and any additional automated tests deemed appropriate
- Take ownership of the end-to-end QA process in newly-started projects
- Draw conclusions about data quality by producing basic descriptive statistics, summaries, and visualisations (a minimal sketch follows this list)
- Proactively suggest and take ownership of improvements to QA processes and methodologies by employing other technologies and tools, including but not limited to browser add-ons, Excel add-ons, and UI-based test automation tools
- Ensure that project requirements are testable; work with project managers and/or clients to clarify ambiguities before QA begins
- Drive innovation and advanced validation and analytics techniques to ensure data quality for Anakin's customers
- Optimize data quality codebases and systems to monitor the Anakin family of app crawlers
- Configure and optimize the automated and manual testing and deployment systems used to check the quality of billions of data points from more than 1,000 crawlers across the company
- Analyze data and bugs that require in-depth investigations
- Interface directly with external customers including managing relationships and steering requirements
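As a purely illustrative example of the descriptive-statistics checks mentioned above, here is a minimal Python sketch; the field names, sample records, and the 5% missing-price threshold are hypothetical assumptions, not Anakin's actual pipeline.

```python
# A minimal sketch of a descriptive-statistics data-quality check.
# Field names (price, product_id) and thresholds are hypothetical.
import statistics

def summarize_prices(records):
    """Compute basic descriptive statistics over crawled price data."""
    prices = [r["price"] for r in records if r.get("price") is not None]
    missing = len(records) - len(prices)
    return {
        "count": len(prices),
        "missing": missing,
        "missing_ratio": missing / len(records) if records else 0.0,
        "mean": statistics.mean(prices) if prices else None,
        "median": statistics.median(prices) if prices else None,
        "min": min(prices, default=None),
        "max": max(prices, default=None),
    }

if __name__ == "__main__":
    crawled = [
        {"product_id": "A1", "price": 19.99},
        {"product_id": "A2", "price": 21.50},
        {"product_id": "A3", "price": None},  # a crawl miss to be flagged
    ]
    summary = summarize_prices(crawled)
    # Flag the batch if too many records are missing a price.
    if summary["missing_ratio"] > 0.05:
        print("WARNING: missing-price ratio above threshold:", summary)
    else:
        print("OK:", summary)
```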
Basic Qualifications:
- 2+ years of experience as a backend or a full-stack software engineer
- Web scraping experience with Python or Node.js
- 2+ years of experience with AWS services such as EC2, S3, Lambda, etc.
- Should have managed a team of software engineers
- Should be paranoid about data quality
Preferred Skills and Experience:
- Deep experience with network debugging across all OSI layers (e.g., using Wireshark)
- Knowledge of networks and/or cybersecurity
- Broad understanding of the landscape of software engineering design patterns and principles
- Ability to work quickly and accurately in a high-pressure environment, e.g., diagnosing and removing bugs at run time within minutes
- Excellent communicator, both written and verbal
Additional Requirements:
- Must be available to work extended hours and weekends when needed to meet critical deadlines
- Must have an aversion to politics and BS; should let their work speak for itself.
- Must be comfortable with uncertainty. In almost all cases, your job will be to figure it out.
- Must not be bound to a comfort zone. Often, you will need to challenge yourself to go above and beyond.
Synapsica is a growth-stage HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, and they should not have to rely on the cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyze medical images and create version 2.0 of advanced radiology reporting.
We are backed by Y Combinator and other marquee investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Join us, if you find this as exciting as we do!
Description:
Synapsica is looking for a Node.js/Python developer who is passionate about designing and implementing scalable solutions of the highest quality. You will be responsible for creating high-performance, responsive, and secure server-side programs to manage the Synapsica platform. You will also be responsible for designing, managing, and supporting our cloud platform and scaling our database. This role is ideal for you if you have a background in backend development and are looking for the next level of career growth in a fast-paced, learning-based, and merit-driven work environment.
Responsibilities:
- Design and development of our new platform modules from scratch.
- Hands-on implementation of our APIs and integrations.
- Design, scaling and integration of new/existing databases.
- Ensuring the entire stack is designed and built for speed and scalability.
- Design and construction of our REST APIs with best-practice security mechanisms (a minimal illustration follows this list).
- Design and implementation of continuous integration and deployment pipelines in tandem with the respective team members.
- Performance tuning and improvements in large scale systems.
- Ensuring responsiveness and cross-platform compatibility of applications.
- Owning and delivering end-to-end products, features, enhancements.
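As a purely illustrative sketch of a token-protected REST endpoint like the one the security bullet above refers to, the following assumes Flask; the endpoint path, bearer-token handling, and in-memory token store are hypothetical stand-ins, not Synapsica's actual design.

```python
# A minimal sketch of a token-protected REST endpoint using Flask.
# The token store and endpoint path are hypothetical illustrations.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical token store; a real deployment would verify signed
# tokens (e.g., JWTs) against a secrets backend instead.
VALID_TOKENS = {"example-token"}

@app.before_request
def require_token():
    # Reject any request that does not carry a known bearer token.
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        return jsonify({"error": "unauthorized"}), 401

@app.get("/api/v1/studies/<study_id>")
def get_study(study_id):
    # Placeholder payload; the real platform would fetch from the database.
    return jsonify({"study_id": study_id, "status": "processed"})

if __name__ == "__main__":
    app.run(port=8000)
```

A real deployment would also serve over HTTPS and rotate credentials rather than keeping tokens in process memory.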
Requirements:
- Degree in Computer Science or related discipline with strong competencies in data structures, algorithms, and software design
- 4+ years of experience writing Python/NodeJS/PHP/Ruby on Rails.
- Prior experience of database design and management in MongoDB, including staying current with the latest practices and versions.
- Experience in building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
- Familiarity with AWS (or equivalent) ecosystem with end-to-end cloud deployment.
- Experience implementing code level unit tests.
- Proficiency with Git / Version control.
- Appreciation for clean and well-documented code.
- Excellent verbal communication skills.
- Good problem solving skills.
- Attention to detail.
- Very high sense of ownership.
- Deep interest and passion for technology
● Should have 6+ years of experience as a software developer
● Good experience with Node.js and Express.js.
● Good with MySQL.
● Experience with Linux scripting.
● Experience with Elasticsearch and Logstash.
● Experience with cybersecurity.
● Great interpersonal and communication skills; can work independently.
● Agile project methodology experience preferred.
● Excellent communication skills.
● Critical thinker and good problem-solver.
● Good time-management skills
● Great interpersonal skills
Working Timings: 3:00 pm to 12:00 am (midnight) IST
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytics solutions to our customers. The right candidate will enthusiastically take ownership in developing and managing robust, scalable, and continuously improving software solutions. The successful candidate will be curious, creative, ambitious, self-motivated, flexible, and have a bias towards action. As part of the early engineering team, you will have the chance to make a measurable impact on the future of Thinkdeeply, while taking on a significant amount of responsibility.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, Machine Learning is a plus.
Experience: 12+ years
Location: Hyderabad
Skills
Bachelor's/Master's/PhD in CS or equivalent industry experience
10+ years of industry experience with Java-related frameworks such as Spring and/or Typesafe
Experience with scripting languages; Python experience highly desirable (5+ years of industry experience in Python)
Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
Demonstrated expertise in building and shipping cloud-native applications
Experience in administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka, the ELK Stack, and Fluentd (a minimal consumer sketch follows this list)
Experience in API development using Swagger
Strong expertise with containerization technologies, including Kubernetes and Docker Compose
Experience with cloud platform services such as AWS, Azure or GCP.
Experience implementing automated testing platforms and unit tests
Proficient understanding of code versioning tools, such as Git
Familiarity with continuous integration tools such as Jenkins
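By way of illustration, a minimal streaming consumer for the kind of Kafka-based pipeline listed above might look like the following Python sketch, assuming the kafka-python package (pip install kafka-python); the topic name and broker address are hypothetical.

```python
# A minimal sketch of a streaming pipeline consumer using kafka-python.
# Topic name and broker address are hypothetical placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                              # hypothetical topic name
    bootstrap_servers=["localhost:9092"],  # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would transform and forward each event (e.g., into
    # Elasticsearch); here we just print it.
    print(f"partition={message.partition} offset={message.offset}: {event}")
```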
Responsibilities
Architect, design, and implement large-scale data processing pipelines
Design and implement APIs
Assist in DevOps operations
Identify performance bottlenecks and bugs, and devise solutions to these problems
Help maintain code quality, organization, and documentation
Communicate with stakeholders regarding various aspects of the solution.
Mentor team members on best practices