About the role:
As an Engineering Manager, your role would involve architecting systems capable of serving as the brains of complex distributed products. You would also closely manage engineers on the team and contribute to team building. A strong technologist at Meesho cares about code modularity, scalability, and re-usability, and thrives in a complex and ambiguous environment.

Required skills & experience:
Bachelors/Masters in Computer Science or equivalent from a premier institute, with at least 8 years of overall professional experience
At least 2 years of experience managing/leading software development teams
Create clear career paths for team members and help them grow with regular & deep mentoring
Perform regular performance evaluations; share and seek feedback
Able to drive sprints and OKRs
Exceptional team-management skills; experience in building large-scale distributed systems
Experience with object-oriented or functional languages: Python, Java, Scala, etc.
Expertise in big data tools like Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
Deep understanding of SQL and NoSQL databases like MySQL, Postgres, MongoDB, and Cassandra
Good experience with cloud infrastructure (AWS preferably): EC2, S3, EMR, RDS, Redshift, BigQuery
Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
Ability to think and analyze both breadth-wise and depth-wise while designing and implementing services
Excellent teamwork skills, flexibility, and ability to handle multiple tasks
Job Description
Be a part of the team that develops and maintains the analytics and data science platform. Perform a functional, technical, and architectural role, and play a key part in evaluating and improving data engineering, data warehouse design, and BI systems. Develop technical architecture designs that support a robust solution, lead full-lifecycle availability of real-time Business Intelligence (BI), and enable the Data Scientists.

Responsibilities
● Construct, test, and maintain data infrastructure and data pipelines to meet business requirements
● Develop process workflows for data preparation, modelling, and mining
● Manage configurations to build reliable datasets for analysis
● Troubleshoot services, system bottlenecks, and application integration
● Design, integrate, and document technical components and dependencies of the big data platform
● Ensure best practices that can be adopted in the Big Data stack and share them across teams
● Work hand in hand with application developers and data scientists to help build software that scales in terms of performance and stability

Skills
● 3+ years of experience managing large-scale data infrastructure and building data pipelines/data products
● Proficient in any data engineering technology; proficiency in AWS data engineering technologies is a plus
● Languages: Python, Scala, or Go
● Experience working with real-time streaming systems
● Experience handling millions of events per day
● Experience developing and deploying data models on the cloud
● Bachelors/Masters in Computer Science or equivalent experience
● Ability to learn and use skills in new technologies
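To make the pipeline responsibilities above concrete, here is a minimal, illustrative sketch of a "clean and aggregate" step in plain Python. It is not the team's actual stack; the field names (user_id, event_type) and the per-user count are invented for the example:

```python
# Minimal sketch of a dataset-preparation step: drop malformed records,
# then aggregate event counts per user. Field names are assumptions.
from collections import Counter
from typing import Iterable, List

def clean_events(raw: Iterable[dict]) -> List[dict]:
    """Keep only records that carry both a user_id and an event_type."""
    return [e for e in raw if e.get("user_id") and e.get("event_type")]

def events_per_user(events: Iterable[dict]) -> Counter:
    """Build a reliable per-user event count from cleaned records."""
    counts: Counter = Counter()
    for e in events:
        counts[e["user_id"]] += 1
    return counts

raw = [
    {"user_id": "u1", "event_type": "click"},
    {"user_id": "u1", "event_type": "view"},
    {"user_id": None, "event_type": "click"},  # malformed, dropped
    {"user_id": "u2", "event_type": "click"},
]
print(events_per_user(clean_events(raw)))  # → Counter({'u1': 2, 'u2': 1})
```

In a production pipeline the same shape would run inside a framework such as Spark, with the cleaning rules and aggregations driven by the business requirements named above.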
Key Responsibilities:
Drive discussions to create/improve the product, process, and technology
Build abstractions and contracts with separation of concerns for a larger scope
Rapid prototyping of robust and scalable web-based applications
Do high-level design with guidance; functional modeling and break-down of a module
Think in terms of platforms & reuse; open-source contribution is a plus
Make incremental changes to architecture, with impact analysis of the same
Do performance tuning and improvements in large-scale distributed systems
Mentor young minds and foster team spirit.

Desired Skills:
Extensive, expert programming experience in any one programming language (strong OO skills preferred)
Deep experience in at least one general programming language (Java, Ruby, Clojure, Scala, C/C++, or SQL)
A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
A penchant for solving complex and interesting problems
BE/BTech or MTech in Computer Science or a related technical discipline (or equivalent)
Excellent coding skills: should be able to convert a design into code fluently
Good skills in writing unit & integration tests with reasonable coverage of code & interfaces; TDD is a plus
Experience building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data
Experience with multi-threading and concurrent programming
Ability to switch between technologies and learn new skills on the go.
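The unit-testing and TDD point above can be illustrated with a tiny standard-library unittest example; the discount() function and its rules are invented purely for illustration:

```python
# Illustrative only: a small function plus the unit tests that pin down its
# contract. In a test-first flow, the tests would be written before the code.
import unittest

def discount(price, percent):
    """Apply a percentage discount; reject percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTests(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

Tests like these cover both the happy path and the failure mode of the interface, which is what "reasonable coverage of code & interfaces" asks for.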
Who we are?
Searce is a Cloud, Automation & Analytics led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine ‘what's next’ and then enabling them to realize that ‘now’. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe?
Best practices are overrated: implementing best practices can only make one ‘average’.
Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work?
It’s all about being Happier first. And the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don’t carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty. And we accept change well. As that’s what life is about.
Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges; we don’t have the time or space for them.
Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla’s new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity & making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction
When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, or changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, or pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.

We are quite keen to meet you if:
You eat, dream, sleep and play with Cloud Data Stores & engineering your processes on cloud architecture
You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people
You like experimenting, taking risks and thinking big

3 things this position is NOT about:
This is NOT just a job; this is a passionate hobby for the right kind.
This is NOT a boxed position. You will code, clean, test, build and recruit, and you will feel that this is not really ‘work’.
This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
Attention to detail matters.
Roles, titles, and ego do not matter; getting things done matters, and getting things done quicker and better matters the most.
Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?

Roles and Responsibilities
Drive and define database design and development of real-time complex products.
Strive for excellence in customer experience, technology, methodology, and execution.
Define and own end-to-end architecture from the definition phase to the go-live phase.
Define reusable components/frameworks, common schemas, standards, and tools to be used, and help bootstrap the engineering team.
Performance tuning of application and database, and code optimizations.
Define database strategy, database design & development standards and SDLC, database customization & extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies.
Architect and develop database schemas, indexing strategies, views, and stored procedures for Cloud applications.
Assist in defining scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development.
Contribute to innovation and continuous enhancement of the platform.
Define and implement a strategy for data services to be used by Cloud and web-based applications.
Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements.
Design robust cloud management implementations, including orchestration and catalog capabilities.
Architect and design distributed data processing solutions using big data technologies (an added advantage).
Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers.

Desired Skills
Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for the DW platform.
Hands-on experience in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus).
Knowledge of other transactional Database Management Systems/open database systems and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus.
Good knowledge of data management principles like Data Architecture, Data Governance, Very Large Database (VLDB) Design, Distributed Database Design, Data Replication, and High Availability.
Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems.
Solid knowledge of at least one industry-leading RDBMS such as Oracle, SQL Server, DB2, or MySQL.
Expertise in providing data architecture solutions and recommendations that are technology-neutral.
Experience in architecture consulting engagements is a plus.
Deep understanding of technical and functional designs for Databases, Data Warehousing, Reporting, and Data Mining areas.

Education & Experience
Bachelors in Engineering or Computer Science (preferably from a premier school); an advanced degree in Engineering, Mathematics, Computer Science, or Information Technology is a plus. A highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees, more so if you have been a techie since the age of 12.
2-5 years of experience in database design & development
0+ years of AWS, Google Cloud Platform, or Hadoop experience
Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.
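One indexing-strategy decision of the kind this role owns can be sketched with the standard-library sqlite3 module as a stand-in for the target RDBMS; the orders table and index name are invented for the example:

```python
# Hedged illustration: a composite index that matches a query's equality
# filter plus its sort order lets the engine avoid a full table scan and a
# separate sort step. Schema and names are assumptions for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, created_at TEXT)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, created_at) VALUES (?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

# Index on (customer_id, created_at): equality column first, sort column second.
conn.execute(
    "CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM orders WHERE customer_id = 7 ORDER BY created_at"
).fetchall()
print(plan)  # the plan should reference idx_orders_customer_created
```

The same reasoning (filter columns before sort columns in a composite index) carries over to the Oracle/SQL Server/DB2/MySQL systems named above, though each engine's planner and EXPLAIN output differ.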
Greetings! Samsung R&D Institute India-Bangalore (SRI-B) is hiring experienced software professionals. Details are as below:

Samsung R&D Institute India-Bangalore (SRI-B) is the largest R&D Center outside of South Korea and a key innovation hub in the Samsung group. With the best talent from India and overseas, our focus is on creating cutting-edge technologies across multiple areas of Samsung’s business that transform the experiences of users both globally and in local markets.

Current Opportunities: Qualified engineers will be hired for roles which include Artificial Intelligence, Big Data, Machine Learning, Data Science, Analytics, Enterprise & IoT Solutions, Wearable Computing, Multimedia Systems, 3GPP, 4G/5G, Network, Modem, Protocols, RTL, PHY, Android/Tizen Platforms, Healthcare/Medical Solutions, Natural Language Processing, Computer Vision, Image Processing, and Computer Architecture.

EDUCATION - Minimum 60% in BE, B.Tech, ME, M.Tech, PhD or MCA
WORK EXPERIENCE - Minimum 1 year
PROGRAMMING SKILLS - Any of the following: C, C++, Java, Python, JavaScript, JSON, XML; jQuery, Spring, Struts, Hibernate, iBatis, Node.js; Memcache/Redis, Cassandra/HBase, MongoDB/CouchDB; Map Reduce, Hadoop, Spark, Hive, Mahout; Fast Data Processing (Storm); Rules Engine (Drools)
GENERAL - Strong problem-solving, analytical, and troubleshooting skills; good understanding of algorithms, data structures, and performance optimization techniques; hands-on with design, coding, debugging, and testing; excellent communication & interpersonal skills; team player.

PS: Please do share this opportunity with your colleagues and friends
Develop software solutions by studying information needs, systems flow, data usage, and work processes, and by investigating problem areas
Determine operational feasibility by evaluating analyses, problem definitions, requirements, solution development, and proposed solutions
Improve operations by conducting systems analysis and recommending changes in policies and procedures
Mentor junior and mid-level engineers
Collaborate with the team to brainstorm and design new features
Grow engineering teams by interviewing, recruiting, and hiring
Make informed decisions quickly and take ownership of services and applications at scale
Work collaboratively with others to achieve goals
Be passionate about great technologies, especially open source
Understand business needs and know how to create the tools to manage them
About Artivatic:
- Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore, with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits using alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence improving their way of doing business more intelligently & seamlessly.
- Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour analysis, auto insurance claims, travel insurance, disease prediction for insurance, and more.
- We have raised US $300K, built products successfully, and completed a few PoCs with some top enterprises in the insurance, banking & health sectors. We are currently 4 months away from generating continuous revenue.

Skills:
- Building the server-side logic that powers our APIs, in effect deploying machine learning models in a production system that can scale to billions of API calls
- Scaling and performance tuning of databases to handle billions of API calls and thousands of concurrent requests
- Collaborate with the data science team to build effective solutions for data collection, pre-processing, and integrating machine learning into the workflow
- Collaborate, provide technical guidance, and engage in design and code review for other team members
- Excellent Scala, Python, and Java programming, architecture, API, and software design skills, including debugging, performance analysis, and test design
- Proficiency with at least one Scala, Go, or Python micro-framework (Flask, Tornado, Play, Spring, etc.), with experience in building REST APIs
- Experience with or understanding of building web crawlers, data-fetching bots, etc.
- Experience with design and optimisation of Neo4j, Cassandra, NoSQL databases, PostgreSQL, Redis, and Elasticsearch
- Familiarity with one of the cloud service providers, AWS or Google Compute Engine
- Computer Science degree with 4+ years of backend programming experience

Experience: 3 years+
Location: Sony World Signal, Koramangala 4th Block, Bangalore
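The "serving ML models behind a REST API" idea above can be reduced to a minimal, standard-library-only sketch of the (method, path) routing shape that micro-frameworks like Flask or Tornado provide; the /score endpoint, its payload fields, and the scoring stand-in are all invented for illustration:

```python
# Hedged sketch of micro-framework-style routing: a table mapping
# (method, path) pairs to handlers, plus JSON in/out. Not any real framework.
import json

ROUTES = {}

def route(method, path):
    """Decorator that registers a handler for a (method, path) pair."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("POST", "/score")
def score(body):
    # Stand-in for invoking a deployed ML model on the request payload;
    # the field names here are assumptions for the example.
    return {"user_id": body["user_id"], "n_features": len(body.get("features", []))}

def handle(method, path, raw_body):
    """Dispatch a request to its handler and serialize the JSON response."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return json.dumps({"error": "not found"})
    return json.dumps(handler(json.loads(raw_body)))

print(handle("POST", "/score", '{"user_id": "u1", "features": [1, 2]}'))
# → {"user_id": "u1", "n_features": 2}
```

In a real service, an HTTP server and framework would own the dispatch loop; at the scale described above (billions of calls), the handler would also sit behind load balancing and caching layers.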