Our client is a wellness products and solutions startup dedicated to helping people live healthy, fulfilling lives. It is an umbrella brand housing multiple brands that together cover all aspects of wellness. Living a healthy lifestyle is challenging: quality products are hard to access at an affordable price, and it is hard to find enough time to truly benefit from them. The startup was founded with the vision of offering a trusted online wellness platform and making quality health solutions available to meet consumers' diverse requirements. The first step towards wellbeing begins with the products consumed at home, and the company aims to offer high-end, customer-oriented solutions that help consumers live a healthy lifestyle. The well-funded startup was founded by IIM-Ahmedabad and IIM-Kozhikode alumni who are venture capitalists driven by a passion for wellness. As a Backend Developer, you will be part of the backend team, responsible for architecting and setting the standards for building our core backend/microservices.
What you will do:
- Shape the entire system for scale, collaborating intensively with the frontend and design teams to create the best consumer experiences.
- Develop microservices that the frontend consumes through API endpoints.
- Translate business requirements into high-quality code.
- Ensure that code is deployed in a secure and scalable fashion.
- Focus on code maintainability and application performance.
- Provide technical advice and assist in solving programming problems.
- Enhance analytics and the overall backend architecture for better performance.
What you need to have:
- BE/BTech with at least 1 year of experience with NodeJS
- Practical experience in building APIs
- Experience with RabbitMQ or a similar queuing system
- Experience with Redis/ElasticSearch is a plus
- Familiarity with CI/CD pipeline tools like Jenkins
- Practical experience with Git
- A knack for benchmarking and optimization
- Experience with AWS services is a plus
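The producer/consumer decoupling that a broker like RabbitMQ provides can be sketched with Python's standard-library `queue` as an in-memory stand-in (names and the processing step are illustrative; a real service would publish and consume via a broker client such as pika):

```python
import queue
import threading

# In-memory stand-in for a message queue (illustration only; a real
# deployment would talk to RabbitMQ through a client library).
task_queue = queue.Queue()
results = []

def worker():
    """Consume messages until a None sentinel arrives."""
    while True:
        msg = task_queue.get()
        if msg is None:
            task_queue.task_done()
            break
        results.append(msg.upper())  # hypothetical processing step
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for payload in ["order.created", "order.paid"]:
    task_queue.put(payload)   # "publish" a message
task_queue.put(None)          # shutdown sentinel
t.join()
print(results)  # ['ORDER.CREATED', 'ORDER.PAID']
```

The point of the pattern, with either the stdlib queue or a real broker, is that producers and consumers share no state beyond the queue itself.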
The Job:
- Provision and maintain a highly available and scalable Kubernetes-based microservice environment on AWS and on-prem infrastructure.
- Design, build, and implement complex multi-region Kubernetes architectures.
- Create deployment processes that improve engineering ease and efficiency.
- Design continuous integration systems that scale.
- Build monitoring systems that can handle a large volume of metrics per day.
Skills:
- Experience in building and maintaining Kubernetes environments.
- Experience in building AWS public and private VPCs with security and networking best practices.
- Experience writing scripts in Bash, Perl, Python, etc.
- Familiarity with scalable networking technologies such as AWS ELB, NGINX, HAProxy, etc.
- Understanding of the OSI model; knowledge of network routing and subnetting.
- Expertise with Docker and container orchestration (ECS, Kubernetes, or Swarm).
- Experience maintaining Linux systems.
- Cloud automation tool experience: Terraform, CloudFormation templates, etc.
- Experience with configuration management tools like Ansible, CFEngine, Puppet, etc.
- Knowledge of CI tools like GitHub, Jenkins, Bamboo, etc.
- Knowledge of logging, monitoring, and visualization tools like ELK, Grafana, Prometheus, check_mk, Nagios, etc.
- Familiarity with standard security practices such as encryption, certificates, and key management.
- Fundamental understanding of modern cloud architecture (virtualization, databases, message queues, etc.).
Differentiators:
- Extensive Kubernetes and AWS knowledge.
- Experience building scalable, highly performant systems and services.
- Outstanding communication skills.
Candidates with AWS/Kubernetes certifications will be preferred.
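As a flavour of the Kubernetes work described above, a highly available stateless microservice is typically expressed as a Deployment with multiple replicas and explicit resource requests (all names, the image, and the port below are placeholders, not part of any real system):

```yaml
# Hypothetical Deployment for a stateless microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                      # survive single-pod failure
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: example/service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:                    # scheduling hints
              cpu: 100m
              memory: 128Mi
```

In practice such manifests are generated or templated (Helm, Kustomize) and applied through CI rather than by hand.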
gingerCube Inc., based in Dallas, USA, provides a software platform that interfaces with hospitals, physician groups, resident programs, and billing services, improving the revenue cycle and reducing time to bill by up to 85%. Our flagship product, maxRVU, has been adopted by physician groups, hospitals, and revenue cycle companies across the US. Over the past 8 years, we have developed a solid brand and continue to grow swiftly with top healthcare systems. To meet rapid expansion, we are looking to hire a full-time Lead Engineer to join our growing team in Mumbai and build our products on the Android SDK stack. As our Lead Engineer, you will be responsible for creating innovative solutions to technical problems and delivering a top-notch user experience to our enterprise customers. You must be highly motivated, lead by example, and have a knack for designing optimized solutions.
Responsibilities:
- Translate designs and wireframes into high-quality code
- Design, build, and maintain high-performance, reusable, and reliable Java code
- Ensure the best possible performance, quality, and responsiveness of the application
- Maintain code quality, organization, and automation
Skills:
- Strong knowledge of the Android SDK, different versions of Android, and how to deal with different screen sizes
- Familiarity with RESTful APIs to connect Android applications to back-end services
- Strong knowledge of Android UI design principles, patterns, and best practices
- Ability to implement AWS and Google Cloud services
- Experience with offline storage, threading, and performance tuning
- Ability to design applications around natural user interfaces, such as touch
- Familiarity with additional sensors, such as gyroscopes and accelerometers
- Knowledge of the open-source Android ecosystem and the libraries available for common tasks
- Ability to understand business requirements and translate them into technical requirements
- Familiarity with cloud message APIs and push notifications
- A knack for benchmarking and optimization
- Understanding of Google's Android design principles and interface guidelines
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration
Who are we? BlueOptima is the only company providing reliable, objective software development productivity metrics. The technology has been implemented by some of the world's largest organizations, including insurance companies, telecoms, and seven of the world's top ten universal banks. BlueOptima is a private limited company, incorporated in 2007 and based in London. BlueOptima is an Equal Opportunities employer.
Whom are we looking for? We are hiring a System Administrator to join our growing company and be a part of our success story. We are looking for a talented System Administrator with deep experience in AWS and in managing a large number of Linux servers used for hosting, to join our research and development team in India.
What does the role involve?
- Manage the AWS services used, covering (but not limited to): EC2 instances and network configuration; IAM policies; the security perimeter on EC2, S3, and VPC.
- Design, install, configure, and maintain PostgreSQL database clusters.
- Respond to and resolve database access, performance, and other issues.
- Collaborate with engineering teams to optimize database usage.
- Work non-peak hours when needed for deployments and other configuration changes that might affect service availability.
- Use tools like Puppet and Terraform for application deployment and management.
- Maintain monitoring tools, e.g. Cacti.
- Configure and maintain an ELK server for log aggregation and alerting.
- Maintain VPN and NIDS services running in the VPC.
The role is based in India.
Why work for us?
- Compensation higher than market salary
- Stimulating challenges that fully use your skills, e.g. real-world technical problems to which solutions cannot simply be found on the internet
- Working alongside other passionate, talented engineers
- Hardware of your choice (e.g. MacBook Pro or Dell XPS)
- Our fast-growing company offers the potential for rapid career progression
More about working at BlueOptima
Required Skills:
- B.Sc. in Computer Science, Information Systems, or a related technical degree; or an equivalent combination of education and experience
- Proficiency in scripting languages like Python, Shell, etc.
- Experience with ELK or Splunk, with a strong understanding of both structured and unstructured data
- Experience with automation tools like Puppet and Terraform
- Excellent oral and written communication skills
- Good understanding of networking, packet capture, and analysis
- Ability to troubleshoot problems remotely, including VPN/network connectivity issues from different OS environments
Job Overview: Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties:
- As a Data Engineer, you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams to foster re-use, design for scale, stability, and operational efficiency of data/analytical solutions.
Education level:
- Bachelor's degree in Computer Science or equivalent
Experience:
- Minimum 3+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience
- Expertise in application, data, and infrastructure architecture disciplines
- Expert at designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design, and business processes
Proficiency in:
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing well-optimized SQL queries
- Orchestration and deployment tools like Airflow and Jenkins for CI/CD (optional)
- Design and development of integration solutions with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices
- Experience generating physical data models and the associated DDL from logical data models
- Experience developing data models for operational, transactional, and operational reporting, including the development of, or interfacing with, data analysis, data mapping, and data rationalization artifacts
- Experience enforcing data modeling standards and procedures
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development, and Big Data solutions
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
Skills (must know):
- Core big-data concepts
- Spark (PySpark/Scala)
- A data integration tool like Pentaho, NiFi, SSIS, etc. (at least 1)
- Handling of various file formats
- Cloud platform: AWS/Azure/GCP
- Orchestration tool: Airflow
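The extract-transform-load pattern this posting centres on can be sketched in miniature with the standard library alone (table names and data are made up; at scale the transform step would be a Spark or Hive job rather than a SQL statement against SQLite):

```python
import sqlite3

# Minimal ETL sketch (illustration only).
conn = sqlite3.connect(":memory:")

# Extract: raw source data lands in a staging table.
conn.execute("CREATE TABLE raw_events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 5.5), ("alice", 4.5)])

# Transform + Load: aggregate per user into a target table
# (a stand-in for what a Spark/Hive job would do at scale).
conn.execute("""
    CREATE TABLE user_totals AS
    SELECT user, SUM(amount) AS total
    FROM raw_events
    GROUP BY user
""")
rows = conn.execute(
    "SELECT user, total FROM user_totals ORDER BY user").fetchall()
print(rows)  # [('alice', 14.5), ('bob', 5.5)]
```

An orchestrator such as Airflow would schedule each of these stages as a separate task with explicit dependencies, rather than running them in one script.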
Job Overview: Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties:
- As a Data Engineer, you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams to foster re-use, design for scale, stability, and operational efficiency of data/analytical solutions.
Education level:
- Bachelor's degree in Computer Science or equivalent
Experience:
- Minimum 5+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience
- Expertise in application, data, and infrastructure architecture disciplines
- Expert at designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design, and business processes
Proficiency in:
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing well-optimized SQL queries
- Orchestration and deployment tools like Airflow and Jenkins for CI/CD (optional)
- Design and development of integration solutions with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices
- Experience generating physical data models and the associated DDL from logical data models
- Experience developing data models for operational, transactional, and operational reporting, including the development of, or interfacing with, data analysis, data mapping, and data rationalization artifacts
- Experience enforcing data modeling standards and procedures
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development, and Big Data solutions
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
Skills (must know):
- Core big-data concepts
- Spark (PySpark/Scala)
- A data integration tool like Pentaho, NiFi, SSIS, etc. (at least 1)
- Handling of various file formats
- Cloud platform: AWS/Azure/GCP
- Orchestration tool: Airflow
Job Description:
- 2-8 years of experience in Java development.
- Knowledge of any cloud technology (AWS/Azure/GCP) would be preferred.
- Knowledge of Kubernetes/Docker is preferred.
- Experience with microservices-based architecture is a plus.
- Should be good at problem solving and logic building.
About Hurain Infotech LLP: Hurain Infotech LLP is a startup founded in January 2020, providing elegant IT solutions to its clients. Working days: Monday to Friday. Flexible timings.
Architectural:
- Design and implement a software architecture
- Select a technology stack
- Design and configure infrastructure
- Select a development toolkit (IDE, etc.)
- Design and implement a database design
- Improve and optimise the application architecture
- Ensure scalability of the application and of the infrastructure
- Explore new technologies and decide whether to implement them
Strategy, planning, and design:
- Take end-to-end ownership of the product: identify technology requirements, define the future product vision, create preliminary design concepts for add-on modules, and shape the overall technology and product roadmap by collaborating with the founders, business development, and marketing teams.
- Ensure user-oriented design is the primary approach to product development across multiple screens, based on user behaviour data and direct customer feedback.
Implementation and deployment:
- Manage product releases, QA cycles, feature implementation, and on-time delivery through the in-house team and vendors.
- Collaborate with the team and customers to define use cases.
- Create wireframes/prototypes, site maps, and user flows for web and mobile platforms.
Operational management:
- Support marketing by implementing technical requirements for SEO/product analytics.
- Establish and supervise a quality assurance process, including integration and system testing.
- Rigorously monitor key performance metrics and coordinate with various teams to take corrective actions if needed.
- Establish and forecast ROI of features and succinctly articulate competitive advantage.
- Set up a data collection and analysis system in collaboration with the CEO to track key performance metrics.
- Strong fundamentals in computer science/engineering and algorithm design.
- Practical knowledge of computer software algorithms in machine/deep learning, NLP, Computer Vision, etc.
Personal Requirements:
- A minimum of 7+ years of hands-on experience in web app development, payment gateway implementation, architecture design, product management, databases, and UI/UX in consumer-facing applications.
- Experience on projects involving engineering and algorithmic functions, machine learning, deep learning, and artificial intelligence is very advantageous.
- Creative self-starter who is comfortable both taking initiative and working in teams.
- Strong communication skills.
- Willingness to learn and utilise emerging technologies.
- Sincere passion for using disruptive technologies that can be globally significant.
We are an IIT Bombay-incubated, early-stage healthcare startup developing mobile-based AI technology to help reduce health risks for women during pregnancy. Our founders are Harvard and Columbia University alums with extensive experience in digital health in the US and India.
Role: Backend developer (Python + Flask/Django; location: Mumbai, Marol)
Must have:
- 1-4 years of hands-on experience in core Python, preferably with the Flask framework, for complex web and mobile applications
- Deployment experience
- Experience with MongoDB, MySQL, and AWS
- Experience in Redis, Aerospike
- Good command of API development and deployment (Gunicorn, Supervisor, Jenkins)
- Algorithms, system design, OOPS concepts
- Agility and flexibility to work in a startup environment with a focus on the customer, excellent communication skills, and a high level of responsibility and responsiveness
- Project management, scheduling, allocation, and delivery
- Proven experience in agile development, sprint planning, and backlog management
Good to have:
- Experience in Machine Learning; Dialogflow or Rasa for NLP
What's the role? Your role as a Principal Engineer will involve working with various teams. As a principal engineer, you will need full knowledge of the software development lifecycle and Agile methodologies. You will demonstrate multi-tasking skills under tight deadlines and constraints. You will regularly contribute to the development of work products (including analyzing, designing, programming, debugging, and documenting software) and may work with customers to resolve challenges and respond to suggestions for improvements and enhancements. You will set the standards and principles for the products you drive, and set up the coding practices, guidelines, and quality bar for the software delivered. You will:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Prepare and install solutions by determining and designing system specifications, standards, and programming.
- Improve operations by conducting systems analysis and recommending changes in policies and procedures.
- Update job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; and participating in professional organizations.
- Protect operations by keeping information confidential.
- Develop software solutions by studying information needs, conferring with users, studying systems flow, data usage, and work processes, investigating problem areas, and following the software development lifecycle.
Who are you? You are a go-getter with an eye for detail, strong problem-solving and debugging skills, and a BE/MCA/ME/M.Tech or equivalent degree from a reputed college/university.
Essential Skills / Experience:
- 10+ years of engineering experience
- Experience in designing and developing high-volume web services using API protocols and data formats
- Proficient in API modelling languages and annotation
- Proficient in Java programming
- Experience with Scala programming
- Experience with ETL systems
- Experience with Agile methodologies
- Experience with cloud services and storage
- Proficient in Unix/Linux operating systems
- Excellent oral and written communication skills
Preferred:
- Functional programming languages (Scala, etc.)
- Scripting languages (bash, Perl, Python, etc.)
- Amazon Web Services (Redshift, ECS, etc.)
About Us: We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes. This is a role for highly technical engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a large range of tools, frameworks, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.
What you will do at Episource:
- Set an agenda to develop and build machine learning platforms that positively impact the business, working with partners across the company, including operations and engineering.
- Work closely with the machine learning team to design and implement back-end components and services.
- Evaluate new technologies, enhance the applications, and provide continuous improvements to produce high-quality software.
Required Skills:
- Strong background in analytics, BI, or data science deployments is preferable, with 2-6 years of experience
- Knowledge of React/Vue, HTML, CSS
- Experience building and consuming APIs
- Experience with MySQL, MongoDB, and the MEAN stack
- Knowledge of and experience with serverless architectures is a plus
- Hands-on experience with AWS or any major cloud service provider for deploying solutions
- Experience with Docker or Kubernetes for deploying solutions on the cloud
- Hands-on experience with Python, Apache Spark, and Big Data platforms to manipulate large-scale structured and unstructured datasets
- Fluency in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
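The information-extraction side of the role can be hinted at with a toy sketch: pulling ICD-10-style codes out of free text with a regular expression. The pattern and sample note below are illustrative only; production clinical NER uses trained models, not regexes:

```python
import re

# Toy sketch of code extraction from a clinical note (illustration only).
# ICD-10-CM codes: one letter (U reserved), two digits, optional dotted
# extension of up to four alphanumerics.
ICD10_PATTERN = re.compile(r"\b[A-TV-Z][0-9]{2}(?:\.[0-9A-Z]{1,4})?\b")

note = ("Assessment: E11.9 type 2 diabetes without complications; "
        "I10 essential hypertension.")
codes = ICD10_PATTERN.findall(note)
print(codes)  # ['E11.9', 'I10']
```

A model-based pipeline would instead tag entity spans first and then normalize them to codes, but the output contract, text in and structured codes out, is the same.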